US 12,461,991 B2
Distributed neural network training system
Yichun Shen, Shanghai (CN); Siyi Li, Shanghai (CN); Yuhong Wen, McLean, VA (US); and Clement Farabet, Saratoga, CA (US)
Assigned to NVIDIA Corporation, Santa Clara, CA (US)
Filed by NVIDIA Corporation, Santa Clara, CA (US)
Filed on Dec. 22, 2020, as Appl. No. 17/130,966.
Application 17/130,966 is a continuation of application No. PCT/CN2020/133656, filed on Dec. 3, 2020.
Prior Publication US 2022/0180125 A1, Jun. 9, 2022
Int. Cl. G06K 9/00 (2022.01); G06F 9/48 (2006.01); G06F 18/214 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06V 10/94 (2022.01); G16H 30/40 (2018.01)
CPC G06F 18/2148 (2023.01) [G06F 9/4881 (2013.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06V 10/955 (2022.01); G16H 30/40 (2018.01); G06V 2201/03 (2022.01)] 14 Claims
OG exemplary drawing
 
1. One or more processors, comprising:
circuitry to cause a first set of weights of a neural network to be updated based at least on one or more second sets of weights for the neural network generated by one or more instances of the neural network executing in one or more distributed clients, wherein the first set of weights of the neural network is updated using individual sets of the one or more second sets of weights via individual clients, of the one or more distributed clients, having obtained an exclusive right to update the first set of weights for a given period.
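The claimed scheme resembles lock-leased aggregation in distributed or federated training: a client must hold an exclusive, time-limited right before its locally generated (second) set of weights is merged into the global (first) set. The sketch below is a minimal illustration of that idea, not the patented implementation; the class and method names (`GlobalModelServer`, `acquire_update_right`, `submit_weights`), the lease mechanism, and the simple averaging merge are all assumptions introduced here for clarity.

```python
import threading
import time

class GlobalModelServer:
    """Illustrative sketch: clients must obtain an exclusive right
    (a time-limited lease) before their weights are merged into the
    first (global) set of weights, per the claim language."""

    def __init__(self, initial_weights, lease_seconds=5.0):
        self.weights = list(initial_weights)  # first set of weights
        self.lease_seconds = lease_seconds    # "given period" of the right
        self._lock = threading.Lock()
        self._holder = None                   # client holding the right
        self._expires = 0.0

    def acquire_update_right(self, client_id):
        """Grant the exclusive update right if it is free or expired."""
        with self._lock:
            now = time.monotonic()
            if self._holder is None or now >= self._expires:
                self._holder = client_id
                self._expires = now + self.lease_seconds
                return True
            return False

    def submit_weights(self, client_id, client_weights, alpha=0.5):
        """Merge a second set of weights from the right-holding client.

        The merge here is a plain convex combination (illustrative only);
        the patent does not specify the aggregation rule.
        """
        with self._lock:
            now = time.monotonic()
            if self._holder != client_id or now >= self._expires:
                return False  # right not held, or lease expired
            self.weights = [
                (1 - alpha) * w + alpha * cw
                for w, cw in zip(self.weights, client_weights)
            ]
            self._holder = None  # release the right after the update
            return True
```

A second client calling `acquire_update_right` while the lease is live is refused, which is one straightforward reading of "having obtained an exclusive right to update the first set of weights for a given period."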