US 12,147,577 B2
Interpretability framework for differentially private deep learning
Daniel Bernau, Karlsruhe (DE); Philip-William Grassal, Walldorf (DE); Hannah Keller, Walldorf (DE); and Martin Haerterich, Wiesloch (DE)
Assigned to SAP SE, Walldorf (DE)
Filed by SAP SE, Walldorf (DE)
Filed on Feb. 19, 2024, as Appl. No. 18/581,254.
Application 18/581,254 is a division of application No. 17/086,244, filed on Oct. 30, 2020, granted, now 12,001,588.
Prior Publication US 2024/0211635 A1, Jun. 27, 2024
Int. Cl. G06F 21/62 (2013.01); G06F 17/18 (2006.01); G06F 18/214 (2023.01); G06N 20/00 (2019.01)
CPC G06F 21/6254 (2013.01) [G06F 17/18 (2013.01); G06F 18/2148 (2023.01); G06N 20/00 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for anonymized analysis of datasets comprising:
receiving data specifying privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset;
calculating, based on the received data, an expected membership advantage ρα that corresponds to a likelihood of an adversary successfully identifying a member in the dataset, the calculating being based on an overlap of two probability distributions; and
applying, using the calculated expected membership advantage ρα, the DP algorithm to a function over the dataset.
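The claim leaves the concrete calculation unspecified. As an illustrative sketch only (not the patented method), one common instantiation assumes the Gaussian mechanism: calibrate the noise scale σ to (ε, δ) via the classic analytic bound σ ≥ s·√(2·ln(1.25/δ))/ε, then read the adversary's best membership advantage off the overlap of the two Gaussians N(0, σ²) and N(s, σ²), giving ρ = 1 − 2Φ(−s/(2σ)), where Φ is the standard normal CDF and s is the query sensitivity. All function names below are hypothetical.

```python
import math


def gaussian_sigma(eps: float, delta: float, sensitivity: float = 1.0) -> float:
    # Classic analytic calibration of the Gaussian mechanism
    # (valid for eps < 1): sigma = s * sqrt(2 * ln(1.25/delta)) / eps.
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps


def expected_membership_advantage(eps: float, delta: float,
                                  sensitivity: float = 1.0) -> float:
    # The adversary distinguishes N(0, sigma^2) from N(s, sigma^2);
    # the maximal advantage (TPR - FPR) is the non-overlapping probability
    # mass of the two distributions: rho = 1 - 2 * Phi(-s / (2 * sigma)).
    sigma = gaussian_sigma(eps, delta, sensitivity)
    # Standard normal CDF via the error function.
    phi = 0.5 * (1.0 + math.erf((-sensitivity / (2.0 * sigma)) / math.sqrt(2.0)))
    return 1.0 - 2.0 * phi
```

Under this sketch, a practitioner can invert the relationship: pick a tolerable advantage ρ first, then search for the (ε, δ) pair that yields it, which is the interpretability direction the claims describe. Note that stronger privacy (smaller ε or δ) shrinks ρ toward 0.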