US 12,412,085 B2
Data and compute efficient equivariant convolutional networks
Mirgahney Husham Awadelkareem Mohamed, Amsterdam (NL); Gabriele Cesa, Diemen (NL); Taco Sebastiaan Cohen, Amsterdam (NL); and Max Welling, Bussum (NL)
Assigned to QUALCOMM Incorporated, San Diego, CA (US)
Filed by QUALCOMM Incorporated, San Diego, CA (US)
Filed on Feb. 8, 2021, as Appl. No. 17/170,745.
Claims priority of provisional application 62/971,047, filed on Feb. 6, 2020.
Prior Publication US 2021/0248467 A1, Aug. 12, 2021
Int. Cl. G06N 3/08 (2023.01); G06F 9/345 (2018.01); G06F 18/213 (2023.01); G06F 18/214 (2023.01); G06V 10/44 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01)
CPC G06N 3/08 (2013.01) [G06F 9/3455 (2013.01); G06F 18/213 (2023.01); G06F 18/214 (2023.01); G06V 10/454 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01)] 28 Claims
OG exemplary drawing
 
1. A method of machine learning, comprising:
converting an architecture of a neural network model from an equivariant architecture to a traditional convolutional architecture, wherein the traditional convolutional architecture is executable on an edge device; and
performing, at the edge device, an inference with the neural network model in the traditional convolutional architecture, wherein:
the neural network model was trained using a total loss function including a task loss component and a weighted equivariance loss component as a regularization loss component that allows the neural network model to enforce symmetries using the traditional convolutional architecture, and
the weighted equivariance loss component is masked based on a mask in one or more layers of the neural network model such that features in locations of an input rotated outside of a defined area are disregarded in calculating the weighted equivariance loss component.
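The claim's training objective can be illustrated with a small sketch: a total loss that sums a task loss and a weighted, masked equivariance penalty comparing f(rot(x)) against rot(f(x)). This is a hypothetical NumPy illustration, not the patented implementation; the toy single-kernel "network" (`conv_features`), the weight `lam`, and the all-ones mask are all assumptions made for the example. The mask plays the role described in the claim: feature locations that a rotation would carry outside the defined area are zeroed out before the penalty is accumulated.

```python
import numpy as np

def conv_features(x, w):
    # Toy stand-in for the traditional convolutional architecture:
    # a single 'same'-padded cross-correlation (hypothetical example).
    H, W = x.shape
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def masked_equivariance_loss(x, w, mask, k=1):
    # Penalize the gap between f(rot(x)) and rot(f(x)); the mask zeros
    # out locations to be disregarded (per the claim, features rotated
    # outside the defined area), so they contribute no loss.
    f_rot_x = conv_features(np.rot90(x, k), w)
    rot_f_x = np.rot90(conv_features(x, w), k)
    diff = (f_rot_x - rot_f_x) ** 2 * mask
    return diff.sum() / max(mask.sum(), 1)

def total_loss(task_loss, equiv_loss, lam=0.1):
    # Total loss = task loss + weighted equivariance regularizer.
    return task_loss + lam * equiv_loss
```

With a 90-degree-rotation-invariant kernel (e.g. all ones) the penalty vanishes, while an asymmetric kernel is penalized, which is how minimizing this regularizer can push an ordinary convolutional network toward the symmetry that an equivariant architecture would enforce by construction.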