US 12,153,718 B2
Defense generator, method for preventing an attack on an AI unit, and computer-readable storage medium
Felix Assion, Berlin (DE); Florens Fabian Gressner, Berlin (DE); Frank Kretschmer, Cottbus (DE); and Stephan Hinze, Berlin (DE)
Assigned to Neurocat GmbH, Berlin (DE)
Appl. No. 17/768,440
Filed by NEUROCAT GMBH, Berlin (DE)
PCT Filed Oct. 13, 2020, PCT No. PCT/EP2020/078725
§ 371(c)(1), (2) Date Apr. 12, 2022,
PCT Pub. No. WO2021/074121, PCT Pub. Date Apr. 22, 2021.
Claims priority of application No. 102019127622.5 (DE), filed on Oct. 14, 2019.
Prior Publication US 2024/0119142 A1, Apr. 11, 2024
Int. Cl. G06F 21/60 (2013.01); G06F 21/55 (2013.01); G06F 21/64 (2013.01); G06N 5/04 (2023.01); G06V 10/82 (2022.01)
CPC G06F 21/64 (2013.01) [G06F 21/554 (2013.01); G06N 5/04 (2013.01); G06V 10/82 (2022.01); G06F 2221/033 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A defense generator for dynamically generating at least one AI defense module, comprising:
at least one processor;
a tiling unit, implemented by the at least one processor, which is adapted to determine at least one tile for model data, wherein the model data is associated with an AI unit and the at least one tile indicates at least one subset of the model data;
an aggregation unit, implemented by the at least one processor, which is adapted to determine aggregated data, wherein the aggregated data assign at least one mathematical key figure to the at least one tile;
a distribution unit, implemented by the at least one processor, which is adapted to determine a distribution function for the aggregated data;
an inference unit, implemented by the at least one processor, which is adapted to determine at least one inference configuration using the distribution function;
a data transformation unit, implemented by the at least one processor, which is adapted to generate at least one AI defense module for the AI unit using the at least one inference configuration, wherein the at least one AI defense module is adapted, for an input data set of the AI unit, to:
determine whether an attack on the AI unit can be associated with the input data set; and/or
determine, using a data transformation, a second input data set with which no attack on the AI unit can be associated;
wherein the defense generator is adapted to receive a user-definable target definition,
wherein the inference unit is adapted to determine the inference configuration by taking into account the target definition, and
wherein the data transformation unit is adapted to select, by taking into account the inference configuration, whether or not to carry out: a determination of whether the input data set can be associated with an attack on the AI unit, and/or a determination, by using the data transformation, of a second input data set with which no attack on the AI unit can be associated.
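
The following Python sketch illustrates one possible reading of the pipeline recited in claim 1. The unit names mirror the claim, but the tiling scheme, the per-tile "mathematical key figure" (here the mean), the normal-distribution fit, the z-score threshold used as the inference configuration, and the clipping transformation are illustrative assumptions and are not details taken from the patent.

import numpy as np


def tile(data: np.ndarray, tile_size: int) -> list[np.ndarray]:
    """Tiling unit: determine tiles, i.e. contiguous subsets of the model data."""
    flat = data.ravel()
    return [flat[i:i + tile_size] for i in range(0, flat.size, tile_size)]


def aggregate(tiles: list[np.ndarray]) -> np.ndarray:
    """Aggregation unit: assign a key figure (here: the mean) to each tile."""
    return np.array([t.mean() for t in tiles])


def fit_distribution(aggregated: np.ndarray) -> tuple[float, float]:
    """Distribution unit: fit a normal distribution to the aggregated data."""
    return float(aggregated.mean()), float(aggregated.std() + 1e-12)


def inference_configuration(mu: float, sigma: float, target: float = 3.0) -> dict:
    """Inference unit: derive a detection threshold from the distribution,
    taking a user-definable target definition (here: a z-score bound) into account."""
    return {"mu": mu, "sigma": sigma, "z_max": target}


def make_defense_module(config: dict, tile_size: int):
    """Data transformation unit: generate an AI defense module that flags inputs
    whose tile statistics fall outside the configured distribution and returns a
    transformed second input data set (illustrative clipping of outliers)."""
    def defend(x: np.ndarray) -> tuple[bool, np.ndarray]:
        scores = aggregate(tile(x, tile_size))
        z = np.abs(scores - config["mu"]) / config["sigma"]
        attack_suspected = bool((z > config["z_max"]).any())
        x_clean = np.clip(x, config["mu"] - config["z_max"] * config["sigma"],
                          config["mu"] + config["z_max"] * config["sigma"])
        return attack_suspected, x_clean
    return defend


# Usage: derive the configuration from benign model data, then wrap inputs of the AI unit.
model_data = np.random.default_rng(0).normal(size=(64, 64))
cfg = inference_configuration(*fit_distribution(aggregate(tile(model_data, 64))))
defense = make_defense_module(cfg, tile_size=64)
suspected, cleaned_input = defense(np.random.default_rng(1).normal(size=(64, 64)))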