US 12,341,758 B2
Systems and methods for blind multimodal learning
Gharib Gharibi, Overland Park, MO (US); Greg Storm, Kansas City, MO (US); Ravi Patel, Kansas City, MO (US); and Riddhiman Das, Parkville, MO (US)
Assigned to Selfiie Corporation, Alameda, CA (US)
Filed by TripleBlind, Inc., Kansas City, MO (US)
Filed on Dec. 22, 2023, as Appl. No. 18/394,205.
Application 18/394,205 is a continuation of application No. 17/939,351, filed on Sep. 7, 2022, granted, now 11,855,970.
Application 17/939,351 is a continuation of application No. 17/180,475, filed on Feb. 19, 2021, granted, now 12,149,510.
Application 17/180,475 is a continuation-in-part of application No. 16/828,085, filed on Mar. 24, 2020, granted, now 11,582,203.
Application 17/180,475 is a continuation-in-part of application No. 16/828,216, filed on Mar. 24, 2020, granted, now 12,026,219.
Claims priority of provisional application 63/241,255, filed on Sep. 7, 2021.
Claims priority of provisional application 63/020,930, filed on May 6, 2020.
Claims priority of provisional application 62/948,105, filed on Dec. 13, 2019.
Prior Publication US 2024/0154942 A1, May 9, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04L 9/40 (2022.01); G06F 16/13 (2019.01); G06F 17/16 (2006.01); G06F 18/2113 (2023.01); G06F 18/24 (2023.01); G06F 21/62 (2013.01); G06N 3/04 (2023.01); G06N 3/048 (2023.01); G06N 3/082 (2023.01); G06N 3/098 (2023.01); G06Q 20/40 (2012.01); G06Q 30/0601 (2023.01); H04L 9/00 (2022.01); H04L 9/06 (2006.01)
CPC H04L 63/0428 (2013.01) [G06F 16/13 (2019.01); G06F 17/16 (2013.01); G06F 18/2113 (2023.01); G06F 18/24 (2023.01); G06F 21/6245 (2013.01); G06N 3/04 (2013.01); G06N 3/048 (2023.01); G06N 3/082 (2013.01); G06N 3/098 (2023.01); G06Q 20/401 (2013.01); G06Q 30/0623 (2013.01); H04L 9/008 (2013.01); H04L 9/0625 (2013.01); G06Q 2220/00 (2013.01); H04L 2209/46 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method comprising:
creating, at a server device and based on assembled data from n client devices, a neural network having n bottom portions and a top portion, wherein the assembled data comprises different types of data;
transmitting, from the server device, each respective bottom portion of the n bottom portions to a respective client device of the n client devices;
during a training iteration for training the neural network:
accepting, at the server device, a respective output from each respective bottom portion of the neural network to yield a plurality of respective outputs;
joining the plurality of respective outputs at a fusion layer on the server device to generate fused respective outputs; and
passing respective subsets of a set of gradients generated in the fusion layer from the server device to a respective client device of the n client devices, wherein each of the n client devices calculates a local set of gradients which is used to update local parameters associated with respective local models on the respective client device to yield a respective trained bottom portion of the neural network; and
after training, generating a combined model based on the respective trained bottom portion of the neural network from each respective client device.
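The claimed training loop can be illustrated with a minimal sketch in plain Python: each client applies its bottom portion to its own modality of data, the server concatenates the client outputs at a fusion layer, applies a top portion, and returns to each client only its subset of the fusion-layer gradients, which the client uses to update its local parameters. This is an assumption-laden toy (linear bottom and top portions, squared-error loss, concatenation as the fusion operation); the `Client` and `Server` classes, dimensions, and data are illustrative and do not come from the patent.

```python
# Toy sketch of the claimed split training protocol.
# Assumptions (not from the patent): linear bottom/top portions,
# concatenation fusion, squared-error loss, illustrative names.
import random

random.seed(0)

def matvec(W, x):
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

class Client:
    """Holds one modality's private data and a 'bottom' portion of the network."""
    def __init__(self, in_dim, emb_dim):
        self.W = [[random.uniform(-0.5, 0.5) for _ in range(in_dim)]
                  for _ in range(emb_dim)]
        self.x = None  # cached input for the backward pass

    def forward(self, x):
        self.x = x
        return matvec(self.W, x)  # raw data never leaves the client

    def backward(self, grad_h, lr):
        # grad_h is this client's subset of the fusion-layer gradients
        for i, g in enumerate(grad_h):
            for j, x_j in enumerate(self.x):
                self.W[i][j] -= lr * g * x_j

class Server:
    """Holds the fusion layer and the 'top' portion of the network."""
    def __init__(self, fused_dim):
        self.w = [random.uniform(-0.5, 0.5) for _ in range(fused_dim)]

    def step(self, outputs, y, lr):
        h = [v for out in outputs for v in out]        # fusion: concatenate
        y_hat = sum(wi * hi for wi, hi in zip(self.w, h))
        loss = 0.5 * (y_hat - y) ** 2
        d_yhat = y_hat - y
        grad_h = [d_yhat * wi for wi in self.w]        # gradient at fusion layer
        self.w = [wi - lr * d_yhat * hi for wi, hi in zip(self.w, h)]
        # split grad_h into per-client subsets
        grads, k = [], 0
        for out in outputs:
            grads.append(grad_h[k:k + len(out)])
            k += len(out)
        return loss, grads

# two clients with different data types (e.g. image features vs. tabular)
clients = [Client(in_dim=4, emb_dim=3), Client(in_dim=2, emb_dim=3)]
server = Server(fused_dim=6)
data = [([0.1, -0.2, 0.4, 0.3], [1.0, -0.5], 0.7)]

losses = []
for _ in range(50):                                    # training iterations
    for x1, x2, y in data:
        outs = [clients[0].forward(x1), clients[1].forward(x2)]
        loss, grads = server.step(outs, y, lr=0.1)
        for c, g in zip(clients, grads):
            c.backward(g, lr=0.1)
        losses.append(loss)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Note that the server sees only the bottom-portion outputs and returns only gradient subsets, never the clients' raw inputs, which is the privacy property the "blind" framing of the claim relies on; the final combined model would assemble the trained `Client.W` matrices with the server-side top portion.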