US 11,748,633 B2
Distributed privacy-preserving computing
Rachael A. Callcut, San Francisco, CA (US); Michael Blum, San Francisco, CA (US); Joe Hesse, San Francisco, CA (US); Robert D. Rogers, Pleasanton, CA (US); Scott Hammond, Mill Valley, CA (US); and Mary Elizabeth Chalk, Austin, TX (US)
Assigned to The Regents of the University of California, Oakland, CA (US)
Filed by The Regents of the University of California, Oakland, CA (US)
Filed on Nov. 16, 2022, as Appl. No. 17/988,664.
Application 17/988,664 is a continuation of application No. 16/831,763, filed on Mar. 26, 2020, granted, now 11,531,904.
Claims priority of provisional application 62/824,183, filed on Mar. 26, 2019.
Claims priority of provisional application 62/948,556, filed on Dec. 16, 2019.
Prior Publication US 2023/0080780 A1, Mar. 16, 2023
Int. Cl. G06N 5/02 (2023.01); G06F 30/20 (2020.01); G06F 21/53 (2013.01); G06F 21/60 (2013.01); G06N 20/00 (2019.01); G06F 16/25 (2019.01); G06F 21/62 (2013.01)
CPC G06N 5/02 (2013.01) [G06F 16/256 (2019.01); G06F 21/53 (2013.01); G06F 21/602 (2013.01); G06F 30/20 (2020.01); G06N 20/00 (2019.01); G06F 21/6245 (2013.01)] 20 Claims
 
1. A method comprising:
identifying a plurality of instances of an algorithm, wherein each instance of the algorithm is integrated into one or more secure capsule computing frameworks, wherein the one or more secure capsule computing frameworks serve each instance of the algorithm to training data assets within one or more data storage structures of one or more data hosts in a secure manner that preserves privacy of the training data assets and each instance of the algorithm;
executing, by a data processing system, a federated training workflow on each instance of the algorithm, wherein the federated training workflow takes as input the training data assets, maps features of the training data assets to a target inference using parameters, computes a loss or error function, updates the parameters to learned parameters in order to minimize the loss or error function, and outputs one or more trained instances of the algorithm;
integrating, by the data processing system, the learned parameters for each trained instance of the algorithm into a fully federated algorithm, wherein the integrating comprises aggregating the learned parameters to obtain aggregated parameters and updating learned parameters of the fully federated algorithm with the aggregated parameters;
executing, by the data processing system, a testing workflow on the fully federated algorithm, wherein the testing workflow takes as input testing data, finds patterns in the testing data using the updated learned parameters, and outputs an inference;
calculating, by the data processing system, performance of the fully federated algorithm in providing the inference;
determining, by the data processing system, whether the performance of the fully federated algorithm satisfies an algorithm termination criterion;
when the performance of the fully federated algorithm does not satisfy the algorithm termination criterion, replacing, by the data processing system, each instance of the algorithm with the fully federated algorithm and re-executing the federated training workflow on each instance of the fully federated algorithm; and
when the performance of the fully federated algorithm does satisfy the algorithm termination criterion, providing, by the data processing system, the performance of the fully federated algorithm and the aggregated parameters to an algorithm developer of the algorithm.
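
The "federated training workflow" recited in the claim is, in machine-learning terms, ordinary supervised training run independently at each data host. The following is a minimal Python sketch assuming a logistic-regression model trained by gradient descent on the log-loss; the model choice, the function name local_train, and the use of NumPy are illustrative assumptions, not limitations of the claim.

    import numpy as np

    def local_train(X, y, params, lr=0.1, epochs=50):
        """One instance of the federated training workflow: take training data
        assets as input, map features to a target inference with the current
        parameters, compute a loss, and update the parameters to minimize it."""
        w = params.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-(X @ w)))  # sigmoid maps features to an inference
            grad = X.T @ (preds - y) / len(y)       # gradient of the log-loss
            w -= lr * grad                          # update parameters toward lower loss
        return w                                    # the instance's learned parameters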
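The integrating step combines each trained instance's learned parameters into the parameters of the fully federated algorithm. One common aggregation is a sample-weighted average of the parameter vectors (the federated-averaging, or FedAvg, scheme); the sketch below assumes that scheme, though the claim is not limited to any particular aggregation.

    import numpy as np

    def fed_avg(local_params, sample_counts):
        """Aggregate each trained instance's learned parameters into the
        parameters of the fully federated algorithm (weighted mean)."""
        weights = np.asarray(sample_counts, dtype=float)
        weights = weights / weights.sum()           # weight each host by its data volume
        return sum(w_i * p_i for w_i, p_i in zip(weights, local_params))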
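The remaining steps form an iterative loop: execute the testing workflow on the fully federated algorithm, calculate its performance, and either redistribute it for another round of training or report the performance and aggregated parameters to the algorithm developer. Continuing the sketch above, and assuming classification accuracy as the performance metric and a fixed threshold as the termination criterion (both stand-ins; the claim leaves the metric and criterion unspecified):

    def federated_rounds(datasets, test_X, test_y, dim, target_acc=0.90, max_rounds=10):
        """Outer loop of the claim: train each instance, integrate the learned
        parameters, test the fully federated algorithm, and either re-execute
        training with it or report performance and aggregated parameters."""
        global_params = np.zeros(dim)
        acc = 0.0
        for _ in range(max_rounds):
            # each data host trains its own instance on its private training assets
            local = [local_train(X, y, global_params) for X, y in datasets]
            counts = [len(y) for _, y in datasets]
            global_params = fed_avg(local, counts)  # the fully federated algorithm
            # testing workflow: infer on held-out data using the updated parameters
            preds = 1.0 / (1.0 + np.exp(-(test_X @ global_params)))
            acc = np.mean((preds > 0.5) == test_y)  # calculated performance
            if acc >= target_acc:                   # termination criterion satisfied
                break                               # stop re-executing training
        return global_params, acc                   # reported to the algorithm developer

Note that only parameters and performance figures leave each host; the training data assets themselves never do, which is the privacy-preserving property the secure capsule computing frameworks are recited to enforce.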