US 11,726,844 B2
Data sharing system and data sharing method therefor
Tianshi Chen, Pudong New Area (CN); Shuai Hu, Pudong New Area (CN); Yifan Hao, Pudong New Area (CN); and Yufeng Gao, Pudong New Area (CN)
Assigned to SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD, Pudong New Area (CN)
Filed by Shanghai Cambricon Information Technology Co., Ltd, Pudong New Area (CN)
Filed on Nov. 25, 2019, as Appl. No. 16/694,176.
Application 16/694,176 is a continuation of application No. 16/693,918, filed on Nov. 25, 2019, granted, now 10,901,815.
Application 16/693,918 is a continuation in part of application No. PCT/CN2018/092829, filed on Jun. 26, 2018.
Claims priority of application No. 201810641721.9 (CN), filed on Jun. 20, 2018.
Prior Publication US 2020/0118004 A1, Apr. 16, 2020
Int. Cl. G06F 9/54 (2006.01); G06N 3/063 (2023.01); G06F 9/22 (2006.01); G06F 12/0875 (2016.01); G06F 13/28 (2006.01); G06N 3/088 (2023.01); G06F 30/27 (2020.01); G06F 15/163 (2006.01); G06N 3/04 (2023.01); G06N 3/045 (2023.01); H04L 12/70 (2013.01)
CPC G06F 9/544 (2013.01) [G06F 9/223 (2013.01); G06F 12/0875 (2013.01); G06F 13/28 (2013.01); G06F 15/163 (2013.01); G06F 30/27 (2020.01); G06N 3/04 (2013.01); G06N 3/045 (2023.01); G06N 3/063 (2013.01); G06N 3/088 (2013.01); G06F 2212/452 (2013.01); H04L 2012/5686 (2013.01)] 16 Claims
OG exemplary drawing
 
1. A processing device for performing a generative adversarial network, comprising:
a memory configured to:
store a computation instruction,
receive input data that includes a random noise and reference data, and
store discriminator neural network parameters and generator neural network parameters;
a computation device configured to:
transmit the random noise input data into a generator neural network and perform an operation to obtain a noise generation result,
input the noise generation result and the reference data into a discriminator neural network to obtain a discrimination result, and
update the discriminator neural network parameters and the generator neural network parameters according to the discrimination result; and
a controller configured to decode the computation instruction into one or more operation instructions and send the one or more operation instructions to the computation device,
wherein the computation instruction includes one or more operation fields and an operation code, and the computation instruction includes at least one of:
a CONFIG instruction configured to configure each constant required by computation for a present layer before computation for each layer of the artificial neural network is started;
a COMPUTE instruction configured to complete arithmetic logical computation for each layer of the artificial neural network;
an IO instruction configured to implement reading-in of input data required by computation from an external address space and storage of the data back into the external address space after computation is completed;
a No Operation (NOP) instruction responsible for clearing the microinstructions currently loaded in all internal microinstruction cache queues and ensuring that all instructions preceding the NOP instruction are completed, where the NOP instruction itself does not include any operation;
a JUMP instruction responsible for enabling the controller to jump to the address of the next instruction to be read in the instruction cache unit, so as to implement a jump of the control flow; and
a MOVE instruction responsible for moving data of a certain address in an internal address space of the device to another address in the internal address space of the device, where this process is independent from a computation unit, and no resource of the computation unit is occupied in an execution process.
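The data flow recited in the claim (random noise into the generator to obtain a noise generation result; that result plus the reference data into the discriminator to obtain a discrimination result; both parameter sets updated from the discrimination result) can be sketched in plain Python. This is a minimal illustrative model, not the patented device: the scalar linear generator, logistic discriminator, learning rate, and hand-derived binary cross-entropy gradients are all assumptions introduced only to make the alternating update concrete.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyGAN:
    """Scalar GAN illustrating the claimed generator/discriminator flow."""

    def __init__(self, rng):
        # generator parameters: g(z) = wg * z + bg
        self.wg, self.bg = rng.normal(), 0.0
        # discriminator parameters: d(x) = sigmoid(wd * x + bd)
        self.wd, self.bd = rng.normal(), 0.0

    def generate(self, z):
        return self.wg * z + self.bg            # noise generation result

    def discriminate(self, x):
        return sigmoid(self.wd * x + self.bd)   # discrimination result

    def step(self, z, reference, lr=0.05):
        """One update of both parameter sets from the discrimination result."""
        fake = self.generate(z)
        d_real = self.discriminate(reference)
        d_fake = self.discriminate(fake)
        # discriminator update: cross-entropy gradients pushing
        # d(reference) toward 1 and d(fake) toward 0
        self.wd -= lr * (np.mean((d_real - 1) * reference) + np.mean(d_fake * fake))
        self.bd -= lr * (np.mean(d_real - 1) + np.mean(d_fake))
        # generator update: push d(generate(z)) toward 1 through the
        # (now frozen) discriminator, via the chain rule
        d_fake = self.discriminate(self.generate(z))
        self.wg -= lr * np.mean((d_fake - 1) * self.wd * z)
        self.bg -= lr * np.mean((d_fake - 1) * self.wd)
        return float(d_real.mean()), float(d_fake.mean())

rng = np.random.default_rng(0)
gan = TinyGAN(rng)
reference = rng.normal(2.0, 0.5, size=256)      # reference data
for _ in range(200):
    noise = rng.normal(size=256)                # random noise input data
    d_real, d_fake = gan.step(noise, reference)
```

In this sketch the two parameter sets are updated in alternation from the same discrimination result, which is the training loop the claim's computation device carries out.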
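The controller's role, decoding a computation instruction (an operation code plus operation fields) into operation instructions while handling the NOP, JUMP, and MOVE special cases, can likewise be sketched. The opcode names come from the claim; the field names (`target`, `src`, `dst`) and the micro-operation tuples are hypothetical, introduced only to make the control flow concrete.

```python
from enum import Enum, auto

class Opcode(Enum):
    CONFIG = auto()   # set per-layer constants before computation starts
    COMPUTE = auto()  # arithmetic/logical computation for a layer
    IO = auto()       # read input from / write results to external space
    NOP = auto()      # drain microinstruction queues; no operation issued
    JUMP = auto()     # redirect the next instruction fetch
    MOVE = auto()     # internal copy, independent of the computation unit

class Controller:
    def __init__(self):
        self.pc = 0        # address of next instruction in the cache unit
        self.pending = []  # microinstruction queue feeding the compute unit

    def decode(self, opcode, fields):
        """Decode one computation instruction into operation instructions."""
        if opcode is Opcode.NOP:
            self.pending.clear()  # clear queued microinstructions
            return []             # NOP carries no operation of its own
        if opcode is Opcode.JUMP:
            self.pc = fields["target"]  # jump of the control flow
            return []
        if opcode is Opcode.MOVE:
            # data moves address-to-address without occupying the compute unit
            return [("copy", fields["src"], fields["dst"])]
        # CONFIG / COMPUTE / IO are queued for the computation device
        micro = [(opcode.name.lower(), dict(fields))]
        self.pending.extend(micro)
        return micro

ctrl = Controller()
ctrl.decode(Opcode.CONFIG, {"layer": 0})
ctrl.decode(Opcode.COMPUTE, {"layer": 0})
ctrl.decode(Opcode.JUMP, {"target": 42})
```

The NOP and JUMP branches return no operation instructions at all, matching the claim's statement that NOP includes no operation and that MOVE occupies no resource of the computation unit.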