US 12,288,376 B2
Systems and methods for defending against physical attacks on image classification
Yevgeniy Vorobeychik, St. Louis, MO (US); Tong Wu, St. Louis, MO (US); and Liang Tong, St. Louis, MO (US)
Assigned to Washington University, St. Louis, MO (US)
Filed by Yevgeniy Vorobeychik, St. Louis, MO (US); Tong Wu, St. Louis, MO (US); and Liang Tong, St. Louis, MO (US)
Filed on Mar. 26, 2021, as Appl. No. 17/214,071.
Claims priority of provisional application 63/000,930, filed on Mar. 27, 2020.
Prior Publication US 2021/0300433 A1, Sep. 30, 2021
Int. Cl. G06V 10/75 (2022.01); B60W 60/00 (2020.01); G06F 18/214 (2023.01); G06F 18/24 (2023.01); G06N 3/08 (2023.01); G06V 10/764 (2022.01); G06V 10/77 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01)
CPC G06V 10/751 (2022.01) [B60W 60/00188 (2020.02); G06F 18/214 (2023.01); G06F 18/24 (2023.01); G06N 3/08 (2013.01); G06V 10/764 (2022.01); G06V 10/7715 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); B60W 2420/403 (2013.01)] 18 Claims
OG exemplary drawing
 
1. An image classification (IC) computing system for defending against physically realizable attacks, the IC computing system comprising at least one processor in communication with at least one memory device, wherein the at least one processor is programmed to:
retrieve, from the at least one memory device, a training dataset of one or more input images, each input image including a real-world object to be identified;
generate at least one adversarial image from a selected image from the training dataset of one or more input images by:
selecting a location on the selected image; and
generating adversarial noise inside a predetermined shape positioned at the selected location using projected gradient descent (PGD), wherein the predetermined shape with the generated adversarial noise occludes a portion of the real-world object to be identified;
train a classifier to classify images by identifying the real-world object in the images, wherein the classifier is trained using the training dataset and the generated at least one adversarial image; and
store the trained classifier in the at least one memory device.
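The claim's core technique, generating adversarial noise confined to a predetermined shape via projected gradient descent (PGD), can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a toy linear classifier with cross-entropy loss standing in for the trained network, a rectangular patch as the predetermined shape, and hypothetical names (`patch_pgd`, `step_size`) chosen for clarity.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def patch_pgd(image, W, label, top, left, h, w, steps=20, step_size=0.05):
    """Generate adversarial noise inside a rectangular patch using PGD.

    image : 2-D ndarray with pixel values in [0, 1]
    W     : (n_classes, n_pixels) weights of a toy linear classifier
            (a stand-in assumption for the trained image classifier)
    label : true class index; PGD ascends the loss for this label
    top, left, h, w : location and size of the patch, i.e. the
            'predetermined shape' positioned at the selected location
    """
    # Binary mask restricting all perturbation to the patch region.
    mask = np.zeros_like(image)
    mask[top:top + h, left:left + w] = 1.0

    adv = image.copy()
    onehot = np.zeros(W.shape[0])
    onehot[label] = 1.0

    for _ in range(steps):
        # Cross-entropy gradient w.r.t. the input for a linear model:
        # dL/dx = W^T (softmax(Wx) - y)
        probs = softmax(W @ adv.ravel())
        grad = ((probs - onehot) @ W).reshape(image.shape)

        # Signed gradient ascent, applied only inside the patch,
        # then projected back onto the valid pixel range [0, 1].
        adv = adv + step_size * np.sign(grad) * mask
        adv = np.clip(adv, 0.0, 1.0)

    return adv
```

In an adversarial-training loop corresponding to the claim, each adversarial image produced this way would be added alongside the clean training images before the classifier is (re)trained; the patch occludes part of the object, so training on such images hardens the classifier against physically realizable attacks such as stickers or printed patches.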