US 12,001,588 B2
Interpretability framework for differentially private deep learning
Daniel Bernau, Karlsruhe (DE); Philip-William Grassal, Heidelberg (DE); Hannah Keller, Mannheim (DE); and Martin Haerterich, Wiesloch (DE)
Assigned to SAP SE, Walldorf (DE)
Filed by SAP SE, Walldorf (DE)
Filed on Oct. 30, 2020, as Appl. No. 17/086,244.
Prior Publication US 2022/0138348 A1, May 5, 2022
Int. Cl. G06F 21/62 (2013.01); G06F 17/18 (2006.01); G06F 18/214 (2023.01); G06N 20/00 (2019.01)
CPC G06F 21/6254 (2013.01) [G06F 17/18 (2013.01); G06F 18/2148 (2023.01); G06N 20/00 (2019.01)] 7 Claims
OG exemplary drawing
 
1. A computer-implemented method for anonymized analysis of datasets comprising:
receiving data specifying a bound for an adversarial posterior belief ρc that corresponds to a likelihood of re-identifying data points from a dataset based on a differentially private function output;
calculating, based on the received data, privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over the dataset, the calculating being based on a ratio of probability distributions of different observations which are bounded by the posterior belief ρc as applied to the dataset; and
applying, using the calculated privacy parameters ε, δ, the DP algorithm to the function over the dataset.
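The claim covers deriving (ε, δ) from a bound ρc on the adversary's posterior belief and then applying a DP mechanism with those parameters. As a minimal illustrative sketch, not the patented method itself: for pure ε-DP (δ = 0) with a uniform prior over two neighboring datasets, the adversary's posterior belief after observing the output is bounded by e^ε / (1 + e^ε), which inverts to ε = ln(ρc / (1 − ρc)). The function names below are hypothetical, and the Laplace mechanism stands in for whatever DP algorithm the calculated parameters would govern.

```python
import math
import random

def epsilon_from_posterior_bound(rho_c: float) -> float:
    """Invert the pure-DP posterior-belief bound (illustrative assumption).

    Under eps-DP with a uniform prior over two neighboring datasets, the
    posterior belief is bounded by e^eps / (1 + e^eps), so
    eps = ln(rho_c / (1 - rho_c)) for 0.5 < rho_c < 1.
    """
    if not 0.5 < rho_c < 1.0:
        raise ValueError("posterior bound must lie in (0.5, 1)")
    return math.log(rho_c / (1.0 - rho_c))

def laplace_mechanism(true_value: float, sensitivity: float, eps: float) -> float:
    """Standard Laplace mechanism: add noise with scale sensitivity / eps."""
    scale = sensitivity / eps
    u = random.random() - 0.5  # uniform in (-0.5, 0.5)
    # Inverse-CDF sampling of a zero-mean Laplace variate.
    return true_value - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

# Example: cap the adversary's posterior belief at 0.75, then release
# a count query (sensitivity 1) under the resulting privacy parameter.
eps = epsilon_from_posterior_bound(0.75)  # ln(3), about 1.0986
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, eps=eps)
```

A stronger ρc bound (closer to 0.5) yields a smaller ε and hence more noise; the δ > 0 case in the claim would require the more involved relationship between (ε, δ) and the posterior belief rather than this closed form.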