US 12,282,869 B2
On-device machine learning platform
Pannag Sanketi, Fremont, CA (US); Wolfgang Grieskamp, Sammamish, WA (US); Daniel Ramage, Seattle, WA (US); and Hrishikesh Aradhye, Mountain View, CA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Jul. 27, 2022, as Appl. No. 17/874,967.
Application 17/874,967 is a continuation of application No. 15/674,885, filed on Aug. 11, 2017, granted, now Pat. No. 11,403,540.
Prior Publication US 2022/0358385 A1, Nov. 10, 2022
Int. Cl. G06N 5/048 (2023.01); G06F 21/62 (2013.01); G06N 20/00 (2019.01)
CPC G06N 5/048 (2013.01) [G06N 20/00 (2019.01); G06F 21/62 (2013.01); G06F 21/629 (2013.01)] 35 Claims
OG exemplary drawing
 
1. A computing system for implementing an on-device machine learning platform, comprising:
one or more processors; and
one or more non-transitory computer-readable media that store instructions that are executable to cause the computing system to perform operations, the operations comprising:
determining, using a context provider that performs client permission control, a mapping that indicates a respective permission status of a client relative to respective context data, wherein the mapping comprises a first permission status of the client relative to first context data, wherein the first permission status indicates that the client has permission to obtain inferences from the on-device machine learning platform that are based on the first context data;
receiving, from a client via an application programming interface (API), an API call that requests that an inference be generated using a machine-learned model executed by the on-device machine learning platform on the basis of input data received from the client and according to one or more configuration options specified by the client, wherein a configuration option identifies the first context data to be used to generate the inference;
determining, based on the mapping, that the client has permission to obtain inferences from the on-device machine learning platform that are based on the first context data;
obtaining the first context data, wherein the first context data is not provided to the client;
based on determining that the client has permission to obtain inferences that are based on the first context data, generating, using the machine-learned model, at least one inference based on the input data and the first context data; and
providing, using the API, the at least one inference to the client.
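The flow recited in claim 1 (a context provider holding a per-client permission mapping, an API call whose configuration option names the context data, a permission check against the mapping, and inference generation that consumes the context data without ever exposing it to the client) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; every class, method, and parameter name here is an assumption introduced for clarity.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ContextProvider:
    """Performs client permission control (hypothetical names).

    `mapping` records a permission status per client per context type,
    mirroring the claimed mapping of clients to context data.
    """
    # client id -> context type -> permitted to obtain inferences?
    mapping: Dict[str, Dict[str, bool]] = field(default_factory=dict)
    # context type -> context value; held inside the platform only
    _store: Dict[str, str] = field(default_factory=dict)

    def has_permission(self, client: str, context_type: str) -> bool:
        return self.mapping.get(client, {}).get(context_type, False)

    def get_context(self, context_type: str) -> str:
        # Context data is consumed by the model on-device; it is never
        # returned to the client, only the resulting inference is.
        return self._store[context_type]


@dataclass
class OnDevicePlatform:
    provider: ContextProvider
    # machine-learned model stand-in: (input_data, context) -> inference
    model: Callable[[str, str], str]

    def get_inference(self, client: str, input_data: str,
                      config: Dict[str, str]) -> str:
        # The configuration option identifies which context data to use.
        context_type = config["context"]
        if not self.provider.has_permission(client, context_type):
            raise PermissionError(
                f"{client} lacks permission for {context_type}")
        context = self.provider.get_context(context_type)
        # Inference is based on both the client's input and the
        # platform-held context data.
        return self.model(input_data, context)
```

A usage sketch: a client permitted for `"location"` context receives an inference computed from that context, while an unlisted client's API call is rejected at the permission check, before any context data is read.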