US 11,893,087 B2
Defending multimodal fusion models against single-source adversaries
Karren Yang, Medford, PA (US); Wan-Yi Lin, Wexford, PA (US); Manash Pratim, Pittsburgh, PA (US); Filipe J. Cabrita Condessa, Pittsburgh, PA (US); and Jeremy Kolter, Pittsburgh, PA (US)
Assigned to Robert Bosch GmbH
Filed by Robert Bosch GmbH, Stuttgart (DE)
Filed on Jun. 16, 2021, as Appl. No. 17/349,665.
Prior Publication US 2022/0405537 A1, Dec. 22, 2022
Int. Cl. G06K 9/00 (2022.01); G06F 18/25 (2023.01); G06N 3/08 (2023.01); G06T 7/246 (2017.01); G06V 20/56 (2022.01)
CPC G06F 18/256 (2023.01) [G06F 18/253 (2023.01); G06N 3/08 (2013.01); G06T 7/246 (2017.01); G06T 2207/20084 (2013.01); G06T 2207/30248 (2013.01); G06V 20/56 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A multimodal perception system comprising:
a controller configured to,
receive a first signal from a first sensor, a second signal from a second sensor, and a third signal from a third sensor,
extract a first feature vector from the first signal,
extract a second feature vector from the second signal,
extract a third feature vector from the third signal,
determine an odd-one-out vector from the first, second, and third feature vectors via an odd-one-out network of a machine learning network, based on inconsistent modality prediction,
fuse, via a fusion network, the first, second, and third feature vectors and odd-one-out vector into a fused feature vector, and
output the fused feature vector.
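The data flow recited in claim 1 (per-modality feature extraction, an odd-one-out prediction over the three feature vectors, then fusion of the features together with that prediction) can be sketched with untrained numpy stand-ins. Everything here is an illustrative assumption, not the patent's specification: the layer shapes, the fixed random weights, the 4-way odd-one-out output (one class per modality plus a "none inconsistent" class), and the scheme of down-weighting each modality by its predicted inconsistency are all hypothetical choices made only to show the shape of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
D_SIG, D_FEAT = 8, 4  # hypothetical signal and feature dimensions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def extract(signal, W):
    """Stand-in feature extractor: fixed linear map + ReLU in place of a trained encoder."""
    return np.maximum(W @ signal, 0.0)

def odd_one_out(feats, W_o):
    """Stand-in odd-one-out network: scores each of the three modalities, plus a
    'no modality inconsistent' option, from the concatenated feature vectors.
    Returns a 4-way probability vector (the odd-one-out vector)."""
    return softmax(W_o @ np.concatenate(feats))

def fuse(feats, odd_vec, W_f):
    """Stand-in fusion network: down-weights each modality's features by its
    predicted inconsistency probability, then mixes the weighted features and
    the odd-one-out vector through a linear layer into one fused vector."""
    weighted = [f * (1.0 - odd_vec[i]) for i, f in enumerate(feats)]
    return W_f @ np.concatenate(weighted + [odd_vec])

# Fixed random weights stand in for trained parameters of the machine learning network.
W1, W2, W3 = (rng.normal(size=(D_FEAT, D_SIG)) for _ in range(3))
W_o = rng.normal(size=(4, 3 * D_FEAT))
W_f = rng.normal(size=(D_FEAT, 3 * D_FEAT + 4))

# Three sensor signals -> three feature vectors -> odd-one-out vector -> fused vector.
signals = [rng.normal(size=D_SIG) for _ in range(3)]
feats = [extract(s, W) for s, W in zip(signals, (W1, W2, W3))]
odd_vec = odd_one_out(feats, W_o)
fused = fuse(feats, odd_vec, W_f)
```

In this sketch the odd-one-out vector serves double duty, as in the claim: it is an input to the fusion network and also gates how strongly each modality contributes, so a modality flagged as inconsistent influences the fused output less.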