US 11,941,721 B2
Using watermark information and weight information to train an embedded neural network model
Tianhao Wang, Hangzhou (CN); Yongliang Liu, Hangzhou (CN); and Qi Zhang, Hangzhou (CN)
Assigned to Alibaba Group Holding Limited
Filed by Alibaba Group Holding Limited, Grand Cayman (KY)
Filed on Apr. 15, 2022, as Appl. No. 17/722,159.
Application 17/722,159 is a continuation of application No. PCT/CN2020/123888, filed on Oct. 27, 2020.
Claims priority of application No. 201911036839.X (CN), filed on Oct. 29, 2019.
Prior Publication US 2022/0237729 A1, Jul. 28, 2022
Int. Cl. G06T 1/00 (2006.01); G06N 3/08 (2023.01); G06T 5/50 (2006.01)
CPC G06T 1/0028 (2013.01) [G06N 3/08 (2013.01); G06T 1/005 (2013.01); G06T 5/50 (2013.01); G06T 2201/0065 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20221 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method implemented by a computing device, the method comprising:
obtaining weight information of a target neural network model;
obtaining target watermark information; and
using the target watermark information and the weight information of the target neural network model to train an embedded neural network model, and updating the weight information of the target neural network model according to target watermark embedded data provided by the embedded neural network model to obtain the target neural network model embedded with the target watermark information, wherein using the target watermark information and the weight information of the target neural network model to train the embedded neural network model comprises:
obtaining random noise information;
using the weight information of the target neural network model as an instance of a first watermark training set, and using the target watermark information as a label of the first watermark training set;
using weight information of a reference neural network model as an instance of a second watermark training set, and using the random noise information as a label of the second watermark training set, the reference neural network model being a neural network model in which the target watermark information is not embedded; and
using the first watermark training set and the second watermark training set as a new training set, and updating the weight information of the embedded neural network model according to a model loss function of the embedded neural network model until the model loss function converges.
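The following is a minimal PyTorch sketch of the procedure recited in claim 1, included only for illustration. The network architectures, watermark length, loss function, optimizers, learning rates, and convergence test below are assumptions made for the example; the claim does not specify them, and this is not the patent's disclosed implementation.

```python
# Illustrative sketch, not the patented implementation. All hyperparameters,
# architectures, and helper names here are assumptions.
import torch
import torch.nn as nn

WATERMARK_BITS = 64  # assumed length of the target watermark information


def weight_vector(model: nn.Module, detach: bool = True) -> torch.Tensor:
    """Flatten a model's weight information into a single vector."""
    parts = [p.detach() if detach else p for p in model.parameters()]
    return torch.cat([p.reshape(-1) for p in parts])


# Target neural network model (to be watermarked) and reference neural network
# model (same assumed architecture, no watermark embedded).
target_model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
reference_model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))

# "Embedded neural network model": maps weight information to watermark bits.
WEIGHT_DIM = weight_vector(target_model).numel()
embedder = nn.Sequential(
    nn.Linear(WEIGHT_DIM, 256), nn.ReLU(),
    nn.Linear(256, WATERMARK_BITS), nn.Sigmoid(),
)

target_watermark = torch.randint(0, 2, (WATERMARK_BITS,)).float()  # label of the first set
random_noise = torch.rand(WATERMARK_BITS)                          # label of the second set

loss_fn = nn.BCELoss()
embedder_opt = torch.optim.Adam(embedder.parameters(), lr=1e-3)

# Train the embedded model on the combined training set until its loss converges:
#   first set:  (target model weights    -> target watermark)
#   second set: (reference model weights -> random noise)
prev_loss = float("inf")
for step in range(10_000):
    pred_target = embedder(weight_vector(target_model))
    pred_reference = embedder(weight_vector(reference_model))
    loss = loss_fn(pred_target, target_watermark) + loss_fn(pred_reference, random_noise)

    embedder_opt.zero_grad()
    loss.backward()
    embedder_opt.step()

    if abs(prev_loss - loss.item()) < 1e-6:  # crude convergence test (assumption)
        break
    prev_loss = loss.item()

# Update the target model's weight information according to the watermark data
# produced by the (now frozen) embedded model, so that the target watermark can
# be extracted from the target weights. In practice this term would be combined
# with the target model's ordinary task loss.
for p in embedder.parameters():
    p.requires_grad_(False)
target_opt = torch.optim.Adam(target_model.parameters(), lr=1e-4)
for step in range(1_000):
    wm_pred = embedder(weight_vector(target_model, detach=False))
    wm_loss = loss_fn(wm_pred, target_watermark)
    target_opt.zero_grad()
    wm_loss.backward()
    target_opt.step()
```

In this reading of the claim, the two-part training set teaches the embedded model to emit the target watermark for the watermarked model's weights and only noise for the unwatermarked reference model's weights, which is what lets extraction of the watermark distinguish the target model from a model that was never watermarked.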