CPC H04L 63/0428 (2013.01) [G06F 16/13 (2019.01); G06F 17/16 (2013.01); G06F 18/2113 (2023.01); G06F 18/24 (2023.01); G06F 21/6245 (2013.01); G06N 3/04 (2013.01); G06N 3/048 (2023.01); G06N 3/082 (2013.01); G06N 3/098 (2023.01); G06Q 20/401 (2013.01); G06Q 30/0623 (2013.01); H04L 9/008 (2013.01); H04L 9/0625 (2013.01); G06Q 2220/00 (2013.01); H04L 2209/46 (2013.01)]
19 Claims
1. A method comprising:
creating, at a server device and based on assembled data from n client devices, a neural network having n bottom portions and a top portion, wherein the assembled data comprises different types of data;
transmitting, from the server device, each respective bottom portion of the n bottom portions to a respective client device of the n client devices;
during a training iteration for training the neural network:
accepting, at the server device, a respective output from each respective bottom portion of the neural network to yield a plurality of respective outputs;
joining the plurality of respective outputs at a fusion layer on the server device to generate fused respective outputs; and
passing, from the server device, respective subsets of a set of gradients generated at the fusion layer to a respective client device of the n client devices, wherein each of the n client devices calculates a local set of gradients that is used to update local parameters associated with a respective local model on the respective client device to yield a respective trained bottom portion of the neural network; and
after training, generating a combined model based on the respective trained bottom portion of the neural network from each respective client device.
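For readers unfamiliar with split learning, the following is a minimal illustrative sketch (not part of the claims) of the training iteration recited in claim 1, written in PyTorch. The module shapes, the concatenation-based fusion layer, the MSE loss, and the SGD optimizers are all assumptions chosen for illustration; the claim does not specify any of them.

```python
# Illustrative sketch of the claim-1 training iteration; all names,
# dimensions, and loss/optimizer choices are assumptions, not the
# patented implementation.
import torch
import torch.nn as nn

n = 3                      # number of client devices
feat_dims = [4, 6, 5]      # each client holds a different type of data
hidden = 8

# Bottom portions, one per client (transmitted by the server in the claim).
bottoms = [nn.Linear(d, hidden) for d in feat_dims]
# Top portion on the server; the fusion layer is modeled here as simple
# concatenation followed by a linear layer (an assumption).
top = nn.Sequential(nn.Linear(n * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

opt_server = torch.optim.SGD(top.parameters(), lr=0.1)
opt_clients = [torch.optim.SGD(b.parameters(), lr=0.1) for b in bottoms]

def training_iteration(client_batches, labels):
    # Each client runs its bottom portion locally; the server accepts
    # only the respective outputs, not the raw data.
    outs = [b(x) for b, x in zip(bottoms, client_batches)]
    # Detaching models the network boundary: the server sees activations,
    # not the clients' computation graphs.
    server_in = [o.detach().requires_grad_(True) for o in outs]
    fused = torch.cat(server_in, dim=1)          # fusion layer joins the outputs
    loss = nn.functional.mse_loss(top(fused), labels)
    opt_server.zero_grad()
    loss.backward()                              # yields gradients at the fusion layer
    opt_server.step()
    # The server passes each client its subset of the fusion-layer gradients;
    # each client finishes backpropagation locally and updates its own
    # parameters, yielding its trained bottom portion.
    for out, srv, opt in zip(outs, server_in, opt_clients):
        opt.zero_grad()
        out.backward(srv.grad)
        opt.step()
    return loss.item()

# Example iteration on random data.
xs = [torch.randn(16, d) for d in feat_dims]
labels = torch.randn(16, 1)
training_iteration(xs, labels)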
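The final combining step can likewise be pictured as assembling the clients' trained bottom portions with the server's top portion into a single inference model. The sketch below continues with the bottoms, top, and feat_dims defined above; treating the combined model as this assembly is an illustrative assumption, not the patented construction.

```python
import torch
import torch.nn as nn

class CombinedModel(nn.Module):
    """Hypothetical combined model: joins the n trained bottom portions
    returned by the clients with the server's fusion/top portion."""
    def __init__(self, bottoms, top):
        super().__init__()
        self.bottoms = nn.ModuleList(bottoms)  # one trained bottom per client
        self.top = top                         # fusion layer + top portion

    def forward(self, xs):
        # xs: one feature tensor per client, mirroring the training-time split
        fused = torch.cat([b(x) for b, x in zip(self.bottoms, xs)], dim=1)
        return self.top(fused)

combined = CombinedModel(bottoms, top)
with torch.no_grad():
    preds = combined([torch.randn(1, d) for d in feat_dims])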