US 12,346,432 B2
Securing systems employing artificial intelligence
Oleg Pogorelik, Lapid (IL); Alex Nayshtut, Gan Yavne (IL); Omer Ben-Shalom, Rishon le-Tzion (IL); Denis Klimov, Beersheba (IL); Raizy Kellermann, Jerusalem (IL); Guy Barnhart-Magen, Herzliya (IL); and Vadim Sukhomlinov, Santa Clara, CA (US)
Assigned to Intel Corporation, Santa Clara, CA (US)
Appl. No. 17/254,235
Filed by INTEL CORPORATION, Santa Clara, CA (US)
PCT Filed Apr. 23, 2019, PCT No. PCT/US2019/028687
§ 371(c)(1), (2) Date Dec. 18, 2020,
PCT Pub. No. WO2020/142110, PCT Pub. Date Jul. 9, 2020.
Claims priority of provisional application 62/786,941, filed on Dec. 31, 2018.
Prior Publication US 2021/0319098 A1, Oct. 14, 2021
Int. Cl. G06F 21/55 (2013.01); G06N 3/04 (2023.01); G06N 3/045 (2023.01); G06N 3/084 (2023.01); G06N 3/094 (2023.01); G06N 5/04 (2023.01); G06N 20/00 (2019.01); G06N 7/01 (2023.01)
CPC G06F 21/554 (2013.01) [G06N 3/04 (2013.01); G06N 3/045 (2023.01); G06N 3/084 (2013.01); G06N 3/094 (2023.01); G06N 5/04 (2013.01); G06N 20/00 (2019.01); G06F 2221/034 (2013.01); G06N 7/01 (2023.01)] 17 Claims
OG exemplary drawing
 
1. An apparatus, comprising:
circuitry; and
memory coupled to the circuitry, the memory storing instructions, which when executed by the circuitry cause the circuitry to:
receive input data from an input device;
generate output data based in part on executing an inference model with the input data, the output data comprising an indication of a visible class of a plurality of visible classes or an indication of a hidden class of a plurality of hidden classes, wherein the visible class is a type of category the inference model is trained to classify when the input data is expected input data and the hidden class is a type of category the inference model is trained to classify when the input data is adversarial input data designed to cause the inference model to misclassify the adversarial input data into the visible class;
determine whether the output data comprises an indication of the hidden class from the plurality of hidden classes, wherein at least one of the plurality of hidden classes corresponds to blacklisted inputs;
provide the generated output data to an output consumer based on a determination that the output data does not comprise an indication of the hidden class; and
provide obfuscated output to the output consumer based on a determination that the output data does comprise an indication of the hidden class, the obfuscated output comprising an indication of a visible class associated with the hidden class instead of the hidden class.
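The filtering logic recited in claim 1 can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation: the class names, the `HIDDEN_TO_VISIBLE` mapping, and the `filter_output` function are all hypothetical, and the inference model itself is elided. The idea shown is only the output-side behavior: when the model predicts a hidden class (trained on blacklisted/adversarial inputs), the output consumer receives the associated visible class instead, so the attacker gains no signal that the attack was detected.

```python
# Hypothetical sketch of the claimed output filtering (names are
# illustrative, not from the patent). The inference model is assumed
# to emit a single class label, which may be a visible class or a
# hidden class corresponding to blacklisted/adversarial inputs.

VISIBLE_CLASSES = {"stop_sign", "speed_limit", "yield"}

# Each hidden class maps to the visible class the adversarial input
# was designed to be misclassified as.
HIDDEN_TO_VISIBLE = {
    "adv_stop_sign": "stop_sign",
    "adv_speed_limit": "speed_limit",
}

def filter_output(predicted_class: str) -> str:
    """Return the class label to expose to the output consumer."""
    if predicted_class in HIDDEN_TO_VISIBLE:
        # Hidden class detected: provide obfuscated output -- the
        # associated visible class -- instead of the hidden class.
        return HIDDEN_TO_VISIBLE[predicted_class]
    # Ordinary visible class: pass the generated output through.
    return predicted_class

print(filter_output("speed_limit"))    # -> speed_limit
print(filter_output("adv_stop_sign"))  # -> stop_sign (obfuscated)
```

In this sketch the consumer cannot distinguish a detected attack from an ordinary classification, which matches the claim's rationale for substituting the visible class associated with the hidden class.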