US 12,106,214 B2
Sensor transformation attention network (STAN) model
Stefan Braun, Saint-Louis (FR); Daniel Neil, Zurich (CH); Enea Ceolini, Zurich (CH); Jithendar Anumula, Zurich (CH); and Shih-Chii Liu, Zurich (CH)
Assigned to SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR); and UNIVERSITAET ZUERICH, Zurich (CH)
Filed by SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR); and UNIVERSITAET ZUERICH, Zurich (CH)
Filed on Oct. 18, 2022, as Appl. No. 17/968,085.
Application 17/968,085 is a continuation of application No. 15/911,969, filed on Mar. 5, 2018, granted, now 11,501,154.
Claims priority of provisional application 62/508,631, filed on May 19, 2017.
Claims priority of provisional application 62/507,385, filed on May 17, 2017.
Claims priority of application No. 10-2017-0117021 (KR), filed on Sep. 13, 2017.
Prior Publication US 2023/0045790 A1, Feb. 16, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 3/08 (2023.01); G06F 18/2413 (2023.01); G06F 18/25 (2023.01); G06N 3/04 (2023.01); G06N 3/0442 (2023.01); G06N 3/0455 (2023.01); G06N 3/0464 (2023.01); G06N 3/084 (2023.01); G06V 10/44 (2022.01); G06V 10/46 (2022.01); G06V 10/764 (2022.01); G06V 10/80 (2022.01); G06V 10/82 (2022.01); G06V 20/10 (2022.01); G10L 15/16 (2006.01); G10L 15/20 (2006.01); G10L 15/24 (2013.01)
CPC G06N 3/08 (2013.01) [G06F 18/2413 (2023.01); G06F 18/256 (2023.01); G06N 3/04 (2013.01); G06N 3/0442 (2023.01); G06N 3/0455 (2023.01); G06N 3/0464 (2023.01); G06N 3/084 (2013.01); G06V 10/454 (2022.01); G06V 10/462 (2022.01); G06V 10/764 (2022.01); G06V 10/806 (2022.01); G06V 10/811 (2022.01); G06V 20/10 (2022.01); G10L 15/16 (2013.01); G06V 10/82 (2022.01); G10L 15/20 (2013.01); G10L 15/24 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A sensor transformation attention network (STAN) model, comprising:
a plurality of sensors configured to collect input signals; and
one or more processors configured to implement:
a plurality of attention circuits configured to calculate attention scores respectively corresponding to feature vectors respectively corresponding to the input signals;
a merge circuit configured to calculate attention values of the attention scores, respectively, and generate a merged transformation vector based on the attention values and the feature vectors; and
a task-specific circuit configured to classify the merged transformation vector,
wherein the merge circuit is further configured to generate the merged transformation vector by scaling the feature vectors based on the corresponding attention values, and by merging the scaled feature vectors using an adding operation.
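The merge step recited in the claim (scale each sensor's feature vector by its attention value, then merge by addition) can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation: the softmax normalization and the linear score function `score_weights` are assumptions standing in for the claimed attention circuits, and `stan_merge` is a hypothetical helper name.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax: turns raw attention scores
    # into attention values that sum to 1 across sensors.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stan_merge(features, score_weights):
    """Merge per-sensor feature vectors into one transformation vector.

    features: list of (feature_dim,) arrays, one per sensor.
    score_weights: list of (feature_dim,) arrays -- a hypothetical
        linear attention circuit producing one scalar score per sensor.
    """
    F = np.stack(features)                  # (n_sensors, feature_dim)
    scores = np.array([w @ f for w, f in zip(score_weights, features)])
    attn = softmax(scores)                  # attention values (sum to 1)
    # Scale each feature vector by its attention value, then add:
    return (attn[:, None] * F).sum(axis=0)  # merged transformation vector
```

Under this reading, the merged transformation vector is a convex combination of the per-sensor feature vectors, so a downstream task-specific classifier sees a single fixed-size input regardless of how many sensors contribute.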