US 12,353,324 B2
Sparse data storage method for deep learning, computer device and storage medium
Kuen Hung Tsoi, Shenzhen (CN); and Xinyu Niu, Shenzhen (CN)
Assigned to SHENZHEN CORERAIN TECHNOLOGIES CO., LTD., Shenzhen (CN)
Filed by Shenzhen Corerain Technologies Co., Ltd., Shenzhen (CN)
Filed on Jul. 31, 2023, as Appl. No. 18/362,178.
Claims priority of application No. 202211244929.X (CN), filed on Oct. 12, 2022.
Prior Publication US 2024/0126684 A1, Apr. 18, 2024
Int. Cl. G06F 12/02 (2006.01); G06F 12/0895 (2016.01); G06F 13/16 (2006.01)
CPC G06F 12/0238 (2013.01) [G06F 12/0895 (2013.01); G06F 13/1678 (2013.01)] 16 Claims
OG exemplary drawing
 
1. A sparse data storage method for deep learning, comprising:
obtaining an offset between current non-zero data and the previous non-zero data preceding the current non-zero data, and generating to-be-transmitted data according to the current non-zero data and the offset, wherein the to-be-transmitted data is stored in a first memory, and wherein the current non-zero data comprises computational data for computation in deep learning;
obtaining the to-be-transmitted data, calculating an address increment according to the offset, and obtaining, according to the address increment and the storage address of the previous non-zero data, a storage address in which the current non-zero data is to be stored in a second memory;
transmitting the current non-zero data to the second memory, and storing the current non-zero data in the storage address in the second memory;
generating a valid tag for tagging the storage address in the second memory, wherein the valid tag is generated by adding a tag to the storage address or recording the location of the storage address; and
reading, according to the valid tag, the storage addresses of the non-zero data and all the non-zero data within the scope covered by a convolution, selecting the current non-zero data for convolution computation, and obtaining, according to the valid tag, zeros within the scope covered by the convolution for the convolution computation.
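
To make the claimed encoding concrete, the following is a minimal sketch of one way the offset step could work: each non-zero element of a flattened tensor is paired with its distance from the preceding non-zero element. All names are hypothetical; this is an illustrative reading of the claim, not the patentee's implementation.

```python
def encode_sparse(tensor_flat):
    """Pack each non-zero value with its offset from the previous non-zero
    element (hypothetical encoding of the "to-be-transmitted data")."""
    pairs = []
    prev_index = -1
    for index, value in enumerate(tensor_flat):
        if value != 0:
            # Offset between the current and the previous non-zero data.
            pairs.append((value, index - prev_index))
            prev_index = index
    return pairs  # would be held in the "first memory" before transmission

print(encode_sparse([0, 3, 0, 0, 5, 1, 0]))  # [(3, 2), (5, 3), (1, 1)]
```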
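Under the same assumptions, the address-reconstruction and tagging steps can be sketched as the receiver accumulating each offset into an address increment, writing the value at the resulting address in the second memory, and setting a per-address valid bit. The bitmap layout is an assumption chosen for illustration; note that the second memory is never cleared, since the valid tag makes stale contents harmless.

```python
def store_sparse(pairs, size):
    """Rebuild storage addresses from the offsets and tag each written
    address in a hypothetical per-address validity bitmap."""
    second_memory = [-1] * size   # stale contents; never cleared, the tag handles zeros
    valid_tag = [False] * size    # one valid bit per storage address
    address = -1                  # storage address of the previous non-zero data
    for value, offset in pairs:
        address += offset                # address increment derived from the offset
        second_memory[address] = value   # store the current non-zero data
        valid_tag[address] = True        # tag the storage address as valid
    return second_memory, valid_tag

mem, tags = store_sparse([(3, 2), (5, 3), (1, 1)], 7)
print(mem)   # [-1, 3, -1, -1, 5, 1, -1]
print(tags)  # [False, True, False, False, True, True, False]
```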
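Finally, the convolution-read step can be pictured as gathering one window: tagged addresses supply the stored non-zero values, while untagged addresses contribute implicit zeros, so zero elements never need to be written or transmitted. A sketch under the same assumptions:

```python
def conv1d_window(second_memory, valid_tag, start, kernel):
    """Dot product over one convolution window; the valid tag supplies the
    zeros, so untagged addresses may hold stale data."""
    acc = 0
    for k, weight in enumerate(kernel):
        addr = start + k
        # The valid tag selects between the stored value and an implicit zero.
        operand = second_memory[addr] if valid_tag[addr] else 0
        acc += weight * operand
    return acc

# Values produced by the store_sparse() sketch above.
mem = [-1, 3, -1, -1, 5, 1, -1]
tags = [False, True, False, False, True, True, False]
print(conv1d_window(mem, tags, 3, [1, 1, 1]))  # 1*0 + 1*5 + 1*1 = 6
```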