US 11,943,460 B2
Variable bit rate compression using neural network models
Yadong Lu, Irvine, CA (US); Yang Yang, San Diego, CA (US); Yinhao Zhu, La Jolla, CA (US); Amir Said, San Diego, CA (US); Reza Pourreza, San Diego, CA (US); and Taco Sebastiaan Cohen, Amsterdam (NL)
Assigned to QUALCOMM INCORPORATED, San Diego, CA (US)
Filed by QUALCOMM Incorporated, San Diego, CA (US)
Filed on Jan. 11, 2022, as Appl. No. 17/573,568.
Claims priority of provisional application 63/136,607, filed on Jan. 12, 2021.
Prior Publication US 2022/0224926 A1, Jul. 14, 2022
Int. Cl. H04N 19/42 (2014.01); H04N 19/124 (2014.01); H04N 19/13 (2014.01); H04N 19/136 (2014.01); H04N 19/30 (2014.01); H04N 19/36 (2014.01)
CPC H04N 19/42 (2014.11) [H04N 19/124 (2014.11); H04N 19/13 (2014.11); H04N 19/136 (2014.11); H04N 19/30 (2014.11)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method for operating an artificial neural network (ANN), comprising:
receiving an input by the ANN;
generating, via the ANN, a latent representation of the input, the latent representation including a plurality of latents;
applying a respective learned latent scaling parameter to each latent to generate a plurality of respective scaled latents, wherein each respective learned latent scaling parameter is learned as a function of a respective channel among a plurality of channels and a tradeoff parameter;
determining that at least one scaled latent associated with at least one channel among the plurality of channels is below a predefined threshold;
determining not to transmit the at least one channel based on the determination that the at least one scaled latent is below the predefined threshold; and
transmitting at least one second channel among the plurality of channels.
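The channel-gating step recited in claim 1 can be sketched as follows. This is an illustrative reading only: the function name, the array shapes, and the use of a maximum-magnitude test against the threshold are assumptions for the sketch, not details taken from the patent, and the scaling parameters (which the claim learns as a function of channel and a tradeoff parameter) are simply passed in as inputs.

```python
import numpy as np

def select_channels_to_transmit(latents, scale, threshold):
    """Gate latent channels per the claimed method (illustrative sketch).

    latents:   array of shape (C, H, W), one latent map per channel
    scale:     array of shape (C,), learned per-channel scaling parameters
               (in the claim these are learned as a function of the channel
               and a rate-distortion tradeoff parameter; here they are given)
    threshold: scalar; a channel whose scaled latents fall below it is not
               transmitted
    """
    # apply each channel's learned scaling parameter to its latents
    scaled = latents * scale[:, None, None]
    # assumed criterion: a channel is "below the threshold" when the
    # largest magnitude among its scaled latents is below the threshold
    energy = np.max(np.abs(scaled), axis=(1, 2))
    keep = energy >= threshold
    # transmit only the channels that pass; report which were kept
    return scaled[keep], np.flatnonzero(keep)

# toy example: 3 channels; channel 1 is scaled to near zero and skipped
rng = np.random.default_rng(0)
latents = rng.standard_normal((3, 4, 4))
scale = np.array([1.0, 1e-4, 0.5])
sent, kept = select_channels_to_transmit(latents, scale, threshold=0.01)
```

In a variable-rate codec of this kind, sweeping the tradeoff parameter changes the learned scales, which in turn changes how many channels survive the threshold, trading bit rate against reconstruction quality without retraining a separate model per rate.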