US 11,710,029 B2
Methods and apparatus to improve data training of a machine learning model using a field programmable gate array
Kooi Chi Ooi, Bukit Gambir (MY); Min Suet Lim, Gelugor (MY); Denica Larsen, Portland, OR (US); Lady Nataly Pinilla Pico, El Dorado Hills, CA (US); and Divya Vijayaraghavan, Los Altos, CA (US)
Assigned to INTEL CORPORATION, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Sep. 28, 2018, as Appl. No. 16/147,037.
Prior Publication US 2019/0050715 A1, Feb. 14, 2019
Int. Cl. G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06N 5/04 (2023.01); G06N 3/063 (2023.01); G06F 15/78 (2006.01); G06F 1/16 (2006.01); G06N 20/00 (2019.01); G06F 16/00 (2019.01); G06N 3/084 (2023.01); G06V 10/94 (2022.01); G06F 18/214 (2023.01); G06F 18/21 (2023.01); G06F 18/2413 (2023.01); G06N 3/048 (2023.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01)
CPC G06N 3/063 (2013.01) [G06F 1/163 (2013.01); G06F 15/7892 (2013.01); G06F 16/00 (2019.01); G06F 18/214 (2023.01); G06F 18/217 (2023.01); G06F 18/24143 (2023.01); G06N 3/045 (2023.01); G06N 3/048 (2023.01); G06N 3/08 (2013.01); G06N 3/084 (2013.01); G06N 5/04 (2013.01); G06N 20/00 (2019.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01); G06V 10/955 (2022.01)] 23 Claims
OG exemplary drawing
 
1. A system to improve data training of a neural network, the system comprising:
one or more processors, the one or more processors respectively associated with one or more corresponding users, the one or more processors to train first neural networks based on data associated with the corresponding users; and
a field-programmable gate array (FPGA) to:
configure a second neural network based on a first set of parameters from the one or more processors, the first set of parameters associated with the first neural networks;
execute the second neural network to determine a difference between a first output associated with a first one of the first neural networks and a second output of the second neural network;
generate a second set of parameters associated with the first one of the first neural networks after a determination to update the first one of the first neural networks based on the difference between the first output and the second output; and
cause transmission of the second set of parameters to at least the first one of the first neural networks to cause an update to at least the first one of the first neural networks.
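The exemplary claim above describes a round-trip flow: user-side "first" neural networks are trained locally, an FPGA-hosted "second" neural network is configured from their parameters, the two networks' outputs are compared, and an updated parameter set is transmitted back when the difference warrants it. The following is a minimal, hypothetical sketch of that flow in plain Python; all names (`UserModel`, `FpgaAggregator`), the parameter-averaging rule, and the difference threshold are illustrative assumptions, not details from the patent. A one-dimensional linear model stands in for each neural network.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class UserModel:
    """A 'first neural network' trained on one user's data (toy linear model)."""
    weight: float
    bias: float

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias


class FpgaAggregator:
    """Stands in for the FPGA hosting the 'second neural network'."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # assumed update criterion, not from the claim
        self.weight = 0.0
        self.bias = 0.0

    def configure(self, models: List[UserModel]) -> None:
        # "First set of parameters": here, an average of the user models'
        # parameters (one plausible aggregation; the claim does not specify).
        n = len(models)
        self.weight = sum(m.weight for m in models) / n
        self.bias = sum(m.bias for m in models) / n

    def maybe_update(self, model: UserModel, x: float) -> bool:
        # Execute the second network and compare its output against the
        # first network's output for the same input.
        first_out = model.predict(x)
        second_out = self.weight * x + self.bias
        if abs(first_out - second_out) > self.threshold:
            # "Second set of parameters": transmit the aggregated parameters
            # back to update the first neural network.
            model.weight, model.bias = self.weight, self.bias
            return True
        return False


# Usage: two user-trained models; one diverges from the aggregate and is updated.
users = [UserModel(weight=1.0, bias=0.0), UserModel(weight=3.0, bias=0.0)]
fpga = FpgaAggregator(threshold=0.5)
fpga.configure(users)                          # averaged weight = 2.0
updated = fpga.maybe_update(users[1], x=1.0)   # |3.0 - 2.0| > 0.5, so update
```

After the run, `users[1]` carries the aggregated parameters while `users[0]`, whose output stayed within the threshold of the second network's output, is left unchanged; this mirrors the claim's per-network update decision.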