US 12,340,810 B2
Server efficient enhancement of privacy in federated learning
Om Thakkar, San Jose, CA (US); Abhradeep Guha Thakurta, Santa Clara, CA (US); Peter Kairouz, Seattle, WA (US); Borja de Balle Pigem, London (GB); and Brendan McMahan, Seattle, WA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Appl. No. 18/007,656
Filed by GOOGLE LLC, Mountain View, CA (US)
PCT Filed Oct. 16, 2020, PCT No. PCT/US2020/055906
§ 371(c)(1), (2) Date Dec. 1, 2022,
PCT Pub. No. WO2021/247066, PCT Pub. Date Dec. 9, 2021.
Claims priority of provisional application 63/035,559, filed on Jun. 5, 2020.
Prior Publication US 2023/0223028 A1, Jul. 13, 2023
Int. Cl. G10L 15/22 (2006.01); G10L 15/06 (2013.01); G10L 15/30 (2013.01)
CPC G10L 15/30 (2013.01) [G10L 15/063 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A method implemented by one or more processors, the method comprising:
selecting, at a remote system, a set of client devices, from a plurality of client devices;
determining, at the remote system, a reporting window indicating a time frame for the set of client devices to provide one or more gradients, to update a global model;
transmitting, by the remote system, to each client device in the set of client devices, the reporting window, wherein transmitting the reporting window causes each of the client devices to at least selectively determine a corresponding reporting time, within the reporting window, for transmitting a corresponding locally generated gradient to the remote system;
receiving, in the reporting window, the corresponding locally generated gradients at the corresponding reporting times, wherein each of the corresponding locally generated gradients is generated by a corresponding one of the client devices based on processing, using a local model stored locally at the client device, data generated locally at the client device to generate a predicted output of the local model;
updating one or more portions of the global model, based on the received gradients;
selecting, at the remote system, an additional set of additional client devices, from the plurality of client devices;
determining, at the remote system, an additional reporting window indicating an additional time frame for the additional set of additional client devices to provide one or more additional gradients, to update the global model;
transmitting, by the remote system, to each additional client device in the additional set of additional client devices, the additional reporting window, wherein transmitting the additional reporting window causes each of the additional client devices to at least selectively determine a corresponding additional reporting time, within the additional reporting window, for transmitting a corresponding additional locally generated gradient to the remote system;
receiving, in the additional reporting window, the corresponding additional locally generated gradients at the corresponding additional reporting times, wherein each of the corresponding additional locally generated gradients is generated by a corresponding one of the additional client devices based on processing, using a local model stored locally at the additional client device, additional data generated locally at the additional client device to generate an additional predicted output of the local model; and
updating one or more additional portions of the global model, based on the received additional gradients.
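The round structure recited in claim 1 can be sketched in code: the remote system selects a cohort and a reporting window, each client independently picks a reporting time within that window (so arrival time does not directly track client identity), computes a gradient on locally generated data, and the server updates the global model from the received gradients before starting the next round with a new cohort. This is a minimal illustrative sketch, not the patented implementation; every function and parameter name below (`run_round`, `local_gradient`, `lr`, the toy linear model and squared-error loss) is a hypothetical choice for exposition.

```python
import random

def local_gradient(model, examples):
    """Toy client-side gradient: linear model, squared-error loss over
    locally generated (x, y) pairs. Stands in for the claim's 'processing,
    using a local model stored locally at the client device, data generated
    locally at the client device'."""
    grad = [0.0] * len(model)
    for x, y in examples:
        pred = sum(w * xi for w, xi in zip(model, x))  # predicted output of the local model
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(examples)
    return grad

def run_round(global_model, client_ids, window_start, window_end, client_data, lr=0.1):
    """One federated round with a reporting window, per claim 1:
    each selected client determines its own reporting time within the
    window and transmits a locally generated gradient; the server applies
    the received gradients to the global model."""
    reports = []
    for cid in client_ids:
        # Client selects a reporting time within [window_start, window_end].
        report_time = random.uniform(window_start, window_end)
        gradient = local_gradient(global_model, client_data[cid])
        reports.append((report_time, gradient))
    # Server receives gradients in reporting-time order, not cohort order.
    reports.sort(key=lambda r: r[0])
    for _, grad in reports:
        global_model = [w - lr * g for w, g in zip(global_model, grad)]
    return global_model

# Two rounds with distinct cohorts, mirroring the claim's initial and
# "additional" selection/window/update steps.
data = {0: [([1.0, 0.0], 1.0)], 1: [([0.0, 1.0], 1.0)],
        2: [([1.0, 1.0], 2.0)], 3: [([1.0, -1.0], 0.0)]}
model = [0.0, 0.0]
model = run_round(model, [0, 1], window_start=0.0, window_end=10.0, client_data=data)
model = run_round(model, [2, 3], window_start=10.0, window_end=20.0, client_data=data)
```

Note that in this sketch the gradients are computed against the round's starting model and summed into the update, so the final model is the same regardless of the order in which reports arrive; only the arrival times, which the clients choose, vary.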