US 11,755,743 B2
Protecting machine learning models from privacy attacks
Amit Sharma, Bengaluru (IN); Aditya Vithal Nori, Cambridge (GB); and Shruti Shrikant Tople, Cambridge (GB)
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC, Redmond, WA (US)
Filed by Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed on Sep. 3, 2019, as Appl. No. 16/559,444.
Prior Publication US 2021/0064760 A1, Mar. 4, 2021
Int. Cl. G06F 21/57 (2013.01); G06N 20/00 (2019.01); G06F 21/55 (2013.01); G06F 21/62 (2013.01); G06N 5/04 (2023.01)
CPC G06F 21/577 (2013.01) [G06F 21/55 (2013.01); G06F 21/6245 (2013.01); G06N 5/04 (2013.01); G06N 20/00 (2019.01); G06F 2221/031 (2013.01); G06F 2221/033 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method for protecting against privacy attacks on machine learning models, comprising:
training a machine learning model using a set of training data and causal relationship data, wherein the causal relationship data identifies, in the set of training data, a subset of features that have a causal relationship with an outcome, and wherein the machine learning model includes a function from the subset of features to the outcome;
receiving a predefined privacy guarantee value;
adding an amount of noise to the machine learning model such that the machine learning model has a privacy guarantee value equivalent to or stronger than the predefined privacy guarantee value, wherein the amount of noise added to the machine learning model is based on a level of protection against privacy attacks, and the noise is added at a level of the machine learning model selected based on a desired accuracy level of the machine learning model; and
providing an output by the machine learning model based on the causal relationship between the subset of features and the outcome.
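
To make the claimed flow concrete, the sketch below illustrates the sequence of claim 1: training a model on only the subset of features identified by causal relationship data, then adding noise calibrated to a predefined privacy guarantee value before providing outputs. It is a minimal illustration, not the patent's implementation; the choice of logistic regression, Laplace output perturbation, the sensitivity placeholder, and the causal_idx list standing in for the causal relationship data are all assumptions introduced here.

```python
# Minimal sketch of the claimed flow; not the patent's actual implementation.
# Assumptions (not from the source): logistic regression as the model, Laplace
# output perturbation for the privacy guarantee, a hypothetical sensitivity
# bound, and a fixed index list standing in for the causal relationship data.
import numpy as np

def train_causal_model(X, y, causal_idx, lr=0.1, epochs=200):
    """Fit logistic regression on only the causal feature subset."""
    Xc = X[:, causal_idx]
    w = np.zeros(Xc.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xc @ w + b)))
        w -= lr * (Xc.T @ (p - y) / len(y))
        b -= lr * np.mean(p - y)
    return w, b

def add_privacy_noise(w, b, epsilon, sensitivity=1.0, rng=None):
    """Perturb the learned parameters with Laplace noise scaled to epsilon.

    sensitivity=1.0 is a hypothetical placeholder; a real deployment would
    derive the bound from the training procedure (e.g., regularization).
    A smaller epsilon (stronger guarantee) yields a larger noise scale.
    """
    rng = rng or np.random.default_rng(0)
    scale = sensitivity / epsilon
    return (w + rng.laplace(0.0, scale, size=w.shape),
            b + rng.laplace(0.0, scale))

def predict(X, causal_idx, w, b):
    """Provide outputs based only on the causal feature subset."""
    return (1.0 / (1.0 + np.exp(-(X[:, causal_idx] @ w + b))) >= 0.5).astype(int)

# Usage: only features 0 and 2 drive the outcome in this synthetic data,
# mirroring a causal subset; epsilon plays the predefined guarantee value.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
w, b = train_causal_model(X, y, causal_idx=[0, 2])
w_priv, b_priv = add_privacy_noise(w, b, epsilon=1.0)
print("accuracy:", np.mean(predict(X, [0, 2], w_priv, b_priv) == y))
```

Under these assumptions, lowering epsilon strengthens the privacy guarantee at the cost of accuracy, while restricting training to the causal subset reflects the claim's premise that causal features generalize with less privacy leakage than spurious ones.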