US 12,229,274 B2
Systems and methods for training set obfuscation utilizing an inverted threat model in a zero-trust computing environment
Mary Elizabeth Chalk, Austin, TX (US); Robert Derward Rogers, Oakland, CA (US); and Alan Donald Czeszynski, Pleasanton, CA (US)
Assigned to BeeKeeperAI, Inc., Austin, TX (US)
Filed by BeeKeeperAI, Inc., Austin, TX (US)
Filed on Feb. 16, 2023, as Appl. No. 18/110,767.
Application 18/110,767 is a continuation of application No. 18/110,308, filed on Feb. 15, 2023.
Prior Publication US 2024/0273233 A1, Aug. 15, 2024
Int. Cl. G06F 21/57 (2013.01); G06F 21/62 (2013.01); G06N 20/00 (2019.01)
CPC G06F 21/577 (2013.01) [G06F 21/6245 (2013.01); G06N 20/00 (2019.01); G06F 2221/033 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A computerized method for model dissemination with data exfiltration prevention in a zero trust environment, the method comprising:
iteratively modifying a noise mixture in a data set used to train an algorithm responsive to an inversion model until performance of the inversion model is below a performance threshold;
characterizing the performance of the algorithm for the data set with the iteratively modified noise mixture;
determining if the performance of the algorithm is at or above a second threshold and, if so, outputting weights for the algorithm;
determining if the performance of the algorithm is below the second threshold and, if so, reverting the algorithm to being trained on the data set without any noise mixture and generating a deployment model; and
wherein the noise mixture is generated by:
interrogating the inversion model to determine which inputs or combinations of inputs are most important to the decision making of the inversion model;
generating noise for the inversion model; and
weighting the magnitude of the noise inversely to the correlation of each input or combination of inputs with the performance of the inversion model.
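The noise-mixture loop recited above can be sketched as follows. This is a minimal, hypothetical illustration only, not the patented implementation: the function names, the per-input importance scores, and the caller-supplied `inversion_perf_fn` are all assumptions, and the inverse weighting follows the claim wording (noise magnitude inversely proportional to each input's correlation with the inversion model's performance).

```python
import random


def noise_weights(importances, eps=1e-6):
    # Claim step: weight noise magnitude inversely to each input's
    # correlation with the inversion model's performance.
    return [1.0 / (imp + eps) for imp in importances]


def add_noise(data, weights, scale, seed=0):
    # Add zero-mean Gaussian noise per input, scaled by its weight.
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, scale * w) for x, w in zip(row, weights)]
            for row in data]


def obfuscate(data, importances, inversion_perf_fn, perf_threshold,
              scale=0.1, max_iters=20):
    """Iteratively strengthen the noise mixture in the training data until
    the inversion model's performance drops below perf_threshold.

    `inversion_perf_fn` is a hypothetical callback that scores the
    inversion model on the (noisy) data set.
    """
    noisy = data
    for _ in range(max_iters):
        if inversion_perf_fn(noisy) < perf_threshold:
            break  # inversion attack degraded enough; stop adding noise
        noisy = add_noise(noisy, noise_weights(importances), scale)
        scale *= 1.5  # strengthen the mixture on each iteration
    return noisy
```

Under the claim, the algorithm would then be retrained on the returned noisy data set; if its performance stays at or above the second threshold its weights are output, otherwise the algorithm reverts to training on the clean data set.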