CPC H03M 13/27 (2013.01) [H03M 13/6577 (2013.01); H04L 1/0041 (2013.01); H04L 1/0045 (2013.01); H04L 1/0071 (2013.01)]    14 Claims

8. A data processing method for a deep neural network model, comprising:
reading a plurality of weights from transmission data;
quantizing each of the weights into a plurality of bits, wherein the bits sequentially comprise a first-type bit, a plurality of second-type bits, a third-type bit, and a plurality of fourth-type bits;
interleaving the first-type bit in each of the weights into a first bit set;
sequentially interleaving each of the second-type bits in each of the weights into a plurality of second bit sets, and reading a second compression rate of each of the second bit sets in response to the second bit sets being compressible;
interleaving the third-type bit in each of the weights into a third bit set, and reading a third compression rate of the third bit set in response to the third bit set being compressible;
compressing each of the second bit sets with the second compression rate, and compressing the third bit set with the third compression rate;
sequentially coding the first bit set, each of the compressed second bit sets, and the compressed third bit set to generate first encoded data corresponding to the transmission data; and
transmitting the first encoded data to an external device.
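Claim 8 describes a bit-plane style pipeline: each quantized weight is split into bit positions, the same bit position of every weight is gathered into one bit set, the second and third bit sets are compressed when compressible, and the sets are then coded in sequence into the encoded data. The Python sketch below illustrates one possible reading of those steps. The 8-bit layout (bit 7 as the first-type bit, bits 6-4 as second-type bits, bit 3 as the third-type bit), the uniform quantizer, and the use of zlib as the compressor are all assumptions made for illustration; the patent specification defines the actual layout, quantizer, compressor, and coding format.

import zlib
import numpy as np

# Hypothetical bit layout for an 8-bit quantized weight (assumption, not
# taken from the claim): bit 7 is the first-type bit, bits 6-4 are the
# second-type bits, bit 3 is the third-type bit, bits 2-0 are fourth-type
# bits (which claim 8 does not code).
FIRST_BIT = 7
SECOND_BITS = (6, 5, 4)
THIRD_BIT = 3

def quantize(weights, n_bits=8):
    # Uniform quantization to unsigned n-bit integers; a stand-in for
    # whatever quantizer the claimed method actually uses.
    w = np.asarray(weights, dtype=np.float32)
    scale = (2 ** n_bits - 1) / (w.max() - w.min() + 1e-12)
    return ((w - w.min()) * scale).round().astype(np.uint8)

def bit_plane(q, bit):
    # Interleave the same bit position of every quantized weight into one
    # bit set (a bit plane), packed into bytes.
    plane = (q >> bit) & 1
    return np.packbits(plane).tobytes()

def encode(weights):
    q = quantize(weights)
    first_set = bit_plane(q, FIRST_BIT)                  # coded uncompressed
    second_sets = [bit_plane(q, b) for b in SECOND_BITS]
    third_set = bit_plane(q, THIRD_BIT)

    def compress(s):
        # zlib stands in for the unspecified compressor; a bit set is
        # treated as compressible only if compression actually shrinks it,
        # and the achieved ratio stands in for the claimed compression rate.
        c = zlib.compress(s)
        return (c, len(c) / len(s)) if len(c) < len(s) else (s, 1.0)

    second_comp = [compress(s) for s in second_sets]
    third_comp = compress(third_set)

    # Sequentially code: first bit set, each compressed second bit set,
    # then the compressed third bit set.
    encoded = first_set + b"".join(c for c, _ in second_comp) + third_comp[0]
    rates = [r for _, r in second_comp] + [third_comp[1]]
    return encoded, rates

encoded, rates = encode(np.random.randn(1024))
print(len(encoded), rates)

In this reading, the first bit set is transmitted uncompressed while the second and third bit sets carry their own compression rates, which is why the claim only recites reading rates for those sets; the fourth-type bits are simply dropped from the encoded data in claim 8 as written.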