US 12,008,731 B2
Progressive data compression using artificial neural networks
Yadong Lu, Irvine, CA (US); Yang Yang, San Diego, CA (US); Yinhao Zhu, La Jolla, CA (US); Amir Said, San Diego, CA (US); and Taco Sebastiaan Cohen, Amsterdam (NL)
Assigned to QUALCOMM Incorporated, San Diego, CA (US)
Filed by QUALCOMM Incorporated, San Diego, CA (US)
Filed on Jan. 24, 2022, as Appl. No. 17/648,808.
Claims priority of provisional application 63/141,322, filed on Jan. 25, 2021.
Prior Publication US 2022/0237740 A1, Jul. 28, 2022
Int. Cl. G06T 3/4046 (2024.01); G06T 9/00 (2006.01)
CPC G06T 3/4046 (2013.01) [G06T 9/002 (2013.01)] 28 Claims
OG exemplary drawing
 
1. A method for compressing content using a neural network, comprising:
receiving content for compression;
encoding the content into a first latent code space through an encoder implemented by an artificial neural network;
generating a first compressed version of the encoded content using a first quantization bin size of a series of quantization bin sizes;
generating a refined compressed version of the encoded content by scaling the first compressed version of the encoded content into one or more second quantization bin sizes in the series of quantization bin sizes smaller than the first quantization bin size, conditioned at least on a value of the first compressed version of the encoded content; and
outputting the refined compressed version of the encoded content;
wherein generating the refined compressed version of the encoded content comprises:
generating a first refined compressed version of the encoded content by scaling the first compressed version of the encoded content into a first finer quantization bin size, conditioned on a value of the first compressed version of the encoded content; and
generating a second refined compressed version of the encoded content by scaling the first refined compressed version of the encoded content into a second finer quantization bin size conditioned on a value of the first refined compressed version of the encoded content and the first compressed version of the encoded content, wherein the second finer quantization bin size is smaller than the first finer quantization bin size.
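The claimed refinement scheme — a coarse quantization of the latent code followed by successively finer quantization stages, each conditioned on the values produced by the coarser stages — can be illustrated with a minimal sketch. This is not the patented implementation: the encoder network, the entropy model, and the conditioning mechanism are all abstracted away, and the `coarse_quantize`/`refine` helpers and the halving bin schedule below are assumptions for illustration only.

```python
import numpy as np

def coarse_quantize(latent, bin_size):
    """Uniform scalar quantization of a latent tensor (first compressed version)."""
    return np.round(latent / bin_size) * bin_size

def refine(latent, coarse, fine_bin):
    """Refine a coarse reconstruction into a smaller quantization bin size.

    Only the residual inside the coarse bin is re-quantized, so each
    refinement stage is conditioned on the value produced by the
    coarser stage, as in the claimed progressive scheme.
    """
    residual = latent - coarse
    return coarse + np.round(residual / fine_bin) * fine_bin

# Hypothetical latent code standing in for the encoder network's output.
rng = np.random.default_rng(0)
y = rng.normal(size=8)

bins = [1.0, 0.5, 0.25]           # a series of progressively smaller bin sizes
rec = coarse_quantize(y, bins[0])  # first compressed version
for b in bins[1:]:
    rec = refine(y, rec, b)        # each stage conditions on the previous one

# After the final stage the reconstruction error is bounded by half the
# smallest bin size.
assert np.max(np.abs(rec - y)) <= bins[-1] / 2
```

Each pass through the loop tightens the reconstruction without discarding the coarser stage's result, which is what lets a decoder stop at any stage and still recover a valid (if coarser) reconstruction.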