US 12,321,484 B2
Hybrid human-machine differential privacy
Omer Dror, Sunnyvale, CA (US); and Ofir Farchy, Modiin (IL)
Assigned to LYNX MD LTD., Tel Aviv (IL)
Filed by LYNX MD LTD., Tel Aviv (IL)
Filed on Feb. 19, 2022, as Appl. No. 17/676,140.
Claims priority of provisional application 63/151,748, filed on Feb. 21, 2021.
Claims priority of provisional application 63/151,751, filed on Feb. 21, 2021.
Claims priority of provisional application 63/151,745, filed on Feb. 21, 2021.
Prior Publication US 2022/0171878 A1, Jun. 2, 2022
Int. Cl. G06F 21/62 (2013.01); G16H 10/60 (2018.01); G16H 40/20 (2018.01); G16H 40/63 (2018.01); G16H 40/67 (2018.01)
CPC G06F 21/6245 (2013.01) [G06F 21/6209 (2013.01); G06F 21/6254 (2013.01); G06F 21/6272 (2013.01); G16H 10/60 (2018.01); G16H 40/20 (2018.01); G16H 40/63 (2018.01); G16H 40/67 (2018.01)] 19 Claims
[OG exemplary drawing omitted]
 
1. A non-transitory computer-readable medium storing a software program comprising data and computer-implementable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for hybrid human-machine differential privacy, the method comprising:
receiving, from computing devices, a first query, a second query, and a third query associated with medical data;
accessing the medical data stored in memory to determine a possible response to the first query, a possible response to the second query, and a possible response to the third query;
using a trained machine learning model to determine a first privacy loss level associated with the possible response to the first query, a second privacy loss level associated with the possible response to the second query, and a third privacy loss level associated with the possible response to the third query, wherein the first privacy loss level and the second privacy loss level are identical, wherein the trained machine learning model includes an inference model, a regression model, a clustering model, a classification algorithm, an image segmentation model, or an object detector;
using an output of the trained machine learning model to determine a first confidence level for the determination of the first privacy loss level, and a second confidence level for the determination of the second privacy loss level, wherein the second confidence level is lower than the first confidence level;
in response to the first privacy loss level and the first confidence level, providing the possible response to the first query;
in response to the third privacy loss level, avoiding providing the possible response to the third query; and
in response to the second privacy loss level and the second confidence level:
providing to a user information indicative of at least one aspect of the possible response to the second query,
receiving an input from the user, and
determining whether to provide or to avoid providing the possible response to the second query based on the input received from the user.
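
The claimed method reduces to a three-way routing decision over each candidate response: release automatically when the estimated privacy loss is acceptable and the estimate is trusted, withhold when the loss is too high, and escalate to a human reviewer when the machine is unsure of its own loss estimate. The following is a minimal Python sketch of that routing logic under stated assumptions; the identifiers (Assessment, route_response, loss_budget, min_confidence) and the threshold values are illustrative and do not appear in the patent.

# Hypothetical sketch of the hybrid human-machine flow of claim 1.
# All identifiers and thresholds are illustrative, not from the patent.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Assessment:
    privacy_loss: float  # estimated privacy loss of releasing the response
    confidence: float    # model's confidence in that estimate

def route_response(
    response: Any,
    assess: Callable[[Any], Assessment],  # trained ML model per claim 1
    ask_human: Callable[[Any], bool],     # human reviewer: True = release
    loss_budget: float = 1.0,             # maximum tolerable privacy loss
    min_confidence: float = 0.9,          # below this, defer to a human
) -> Optional[Any]:
    """Return the response if released, or None if withheld."""
    a = assess(response)
    if a.privacy_loss > loss_budget:
        return None                       # third-query branch: withhold
    if a.confidence >= min_confidence:
        return response                   # first-query branch: auto-release
    # second-query branch: acceptable loss but low confidence -> human in loop
    return response if ask_human(response) else None

# Stub model and reviewer exercising all three branches of claim 1.
def stub_model(resp: str) -> Assessment:
    return {"r1": Assessment(0.5, 0.95),  # low loss, high confidence
            "r2": Assessment(0.5, 0.60),  # same loss, low confidence
            "r3": Assessment(5.0, 0.99)}[resp]  # loss over budget

print(route_response("r1", stub_model, lambda r: True))   # "r1" (auto-released)
print(route_response("r2", stub_model, lambda r: False))  # None (human withheld)
print(route_response("r3", stub_model, lambda r: True))   # None (loss too high)

Note that the human is consulted only when confidence in the privacy-loss estimate is low, mirroring the claim's second query, which carries the same privacy loss level as the first query but a lower confidence level.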