US 11,704,391 B2
Machine learning model with watermarked weights
Deepak Kumar Poddar, Bangalore (IN); Mihir Mody, Bangalore (IN); Veeramanikandan Raju, Bangalore (IN); and Jason A. T. Jones, Richmond, TX (US)
Assigned to Texas Instruments Incorporated, Dallas, TX (US)
Filed by TEXAS INSTRUMENTS INCORPORATED, Dallas, TX (US)
Filed on Sep. 28, 2021, as Appl. No. 17/487,517.
Application 17/487,517 is a continuation of application No. 16/188,560, filed on Nov. 13, 2018, granted, now Pat. No. 11,163,861.
Claims priority of provisional application 62/612,274, filed on Dec. 29, 2017.
Prior Publication US 2022/0012312 A1, Jan. 13, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 21/00 (2013.01); G06F 21/16 (2013.01); G06N 20/00 (2019.01); G06F 21/12 (2013.01); G06N 3/047 (2023.01)
CPC G06F 21/16 (2013.01) [G06F 21/121 (2013.01); G06N 3/047 (2023.01); G06N 20/00 (2019.01)] 22 Claims
OG exemplary drawing
 
1. A system comprising:
a processing unit;
a memory storing software instructions that, when executed by the processing unit, cause the processing unit to:
receive a machine learning model comprising a plurality of layers, respective ones of the layers comprising multiple weights;
determine an accuracy bias for each of multiple different sets of possible values for Np and Nb, wherein an Np of a respective layer is a number of partitions into which to group the weights in the respective layer, and an Nb of a respective partition is a number of least significant bits (LSBs) of the respective partition to be used for watermarking;
determine Np for each of the layers and Nb for each of the partitions in response to the determined accuracy biases;
insert one or more watermark bits into the Nb LSBs of the weights in each of the Np respective partitions in each of the respective layers; and
scramble one or more of the weight bits to produce watermarked and scrambled weights; and
an output device configured to provide the watermarked and scrambled weights to another device.
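For illustration only, the following Python sketch shows one way the claimed flow could look for a single layer of 8-bit quantized weights. The function names (watermark_layer, scramble_bits, choose_np_nb), the fixed 8-bit weight width, the keyed bit-position permutation used as the scrambling step, and the eval_accuracy callable are all assumptions made for the sketch, not the patented implementation; the claim's per-layer Np and per-partition Nb selection is reduced here to a single (Np, Nb) pair per layer.

import numpy as np

def scramble_bits(values, seed=0):
    # Keyed permutation of the 8 bit positions within each weight; a simple
    # stand-in for the claimed scrambling of one or more weight bits.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(8).tolist()        # perm[src] = destination bit index
    out = np.zeros_like(values)
    for src, dst in enumerate(perm):
        out |= ((values >> src) & 1) << dst
    return out

def watermark_layer(weights, num_partitions, nb_lsbs, watermark_bits, seed=0):
    # Embed watermark bits into the Nb LSBs of the weights in each of the Np
    # partitions of one layer, then scramble the result. `weights` is assumed
    # to be a numpy uint8 array of already-quantized weights; `watermark_bits`
    # is a 0/1 sequence that is cycled as needed.
    flat = weights.ravel().copy()
    keep_mask = 0xFF & ~((1 << nb_lsbs) - 1)            # clears the Nb LSBs
    bit_idx = 0
    for part in np.array_split(flat, num_partitions):   # Np partitions (views into flat)
        for i in range(part.size):
            wm = 0
            for b in range(nb_lsbs):                     # next Nb watermark bits
                wm |= (watermark_bits[bit_idx % len(watermark_bits)] & 1) << b
                bit_idx += 1
            part[i] = (part[i] & keep_mask) | wm         # overwrite the LSBs
    return scramble_bits(flat, seed).reshape(weights.shape)

def choose_np_nb(weights, candidates, watermark_bits, eval_accuracy, baseline_acc):
    # Pick the (Np, Nb) candidate whose watermarking biases model accuracy the
    # least; eval_accuracy is a hypothetical callable returning model accuracy
    # when the layer uses the supplied watermarked weights.
    best = None
    for cand_np, cand_nb in candidates:
        acc = eval_accuracy(watermark_layer(weights, cand_np, cand_nb, watermark_bits))
        accuracy_bias = baseline_acc - acc               # accuracy cost of this (Np, Nb)
        if best is None or accuracy_bias < best[0]:
            best = (accuracy_bias, cand_np, cand_nb)
    return best[1], best[2]

# Example: watermark a toy 4x8 layer of 8-bit quantized weights.
layer = np.arange(32, dtype=np.uint8).reshape(4, 8)
wm_layer = watermark_layer(layer, num_partitions=4, nb_lsbs=2,
                           watermark_bits=[1, 0, 1, 1], seed=42)

In the claim's terms, num_partitions plays the role of Np for the layer and nb_lsbs the role of Nb for each partition; the claimed system would evaluate the accuracy bias of candidate (Np, Nb) values across all layers before embedding, and would then provide the watermarked and scrambled weights to another device via an output device.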