US 11,677,948 B2
Image compression and decoding, video compression and decoding: methods and systems
Chri Besenbruch, London (GB); Ciro Cursio, London (GB); Christopher Finlay, London (GB); Vira Koshkina, London (GB); Alexander Lytchier, London (GB); Jan Xu, London (GB); and Arsalan Zafar, London (GB)
Assigned to DEEP RENDER LTD., London (GB)
Filed by DEEP RENDER LTD., London (GB)
Filed on May 10, 2022, as Appl. No. 17/740,716.
Application 17/740,716 is a continuation of application No. PCT/GB2021/051041, filed on Apr. 29, 2021.
Claims priority of provisional application 63/053,807, filed on Jul. 20, 2020.
Claims priority of provisional application 63/017,295, filed on Apr. 29, 2020.
Claims priority of application No. 2006275 (GB), filed on Apr. 29, 2020.
Prior Publication US 2022/0279183 A1, Sep. 1, 2022
Int. Cl. H04N 19/126 (2014.01); G06N 3/08 (2006.01); H04N 19/13 (2014.01); G06V 10/774 (2022.01); G06N 3/04 (2006.01); G06N 3/084 (2023.01)
CPC H04N 19/126 (2014.11) [G06N 3/0454 (2013.01); G06N 3/084 (2013.01); G06V 10/774 (2022.01); H04N 19/13 (2014.11)] 16 Claims
OG exemplary drawing
 
1. A computer-implemented method of training a first neural network and a second neural network, the neural networks being for use in lossy image or video compression, transmission and decoding, the method including the steps of:
(i) receiving an input training image;
(ii) encoding the input training image using the first neural network, to produce a latent representation;
(iii) quantizing the latent representation to produce a quantized latent;
(iv) using the second neural network to produce an output image from the quantized latent, wherein the output image is an approximation of the input training image;
(v) evaluating a loss function based on differences between the output image and the input training image;
(vi) evaluating a gradient of the loss function;
(vii) back-propagating the gradient of the loss function through the second neural network and through the first neural network, to update weights of the second neural network and of the first neural network; and
(viii) repeating steps (i) to (vii) using a set of training images, to produce a trained first neural network and a trained second neural network, and
(ix) storing the weights of the trained first neural network and of the trained second neural network;
wherein the loss function is a weighted sum of a rate term and a distortion term,
wherein split quantization is used during the evaluation of the gradient of the loss function, with a combination of two quantization proxies for the rate term and the distortion term.
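The sketch below illustrates the training loop recited in claim 1, steps (i) to (ix), in PyTorch. The network architectures, the unit-Gaussian rate estimate, the MSE distortion, the weight lam, and the particular choice of quantization proxies (additive uniform noise for the rate term, straight-through rounding for the distortion term) are illustrative assumptions; the claim does not fix these choices.

```python
# Minimal sketch, assuming a PyTorch setup; not the patented design.
import math
import torch
import torch.nn as nn

class Encoder(nn.Module):          # first neural network (step ii)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, stride=2, padding=2))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):          # second neural network (step iv)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1))
    def forward(self, y):
        return self.net(y)

def rate_proxy(y):
    """Quantization proxy for the rate term: additive uniform noise (assumed choice)."""
    noisy = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    # Bits per image under an assumed unit-Gaussian prior; a real codec
    # would use a learned entropy model here.
    nll_nats = 0.5 * noisy.pow(2) + 0.5 * math.log(2 * math.pi)
    return nll_nats.sum() / math.log(2) / y.shape[0]

def distortion_proxy(y):
    """Quantization proxy for the distortion term: straight-through rounding (assumed choice)."""
    # Rounds in the forward pass, passes the gradient through unchanged.
    return y + (torch.round(y) - y).detach()

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
lam = 0.01                                          # weight balancing rate vs. distortion

def training_step(x):                               # x: batch of input training images (i)
    y = encoder(x)                                  # (ii) latent representation
    rate = rate_proxy(y)                            # rate term via the noise proxy
    x_hat = decoder(distortion_proxy(y))            # (iii)-(iv) quantize and decode via the rounding proxy
    distortion = nn.functional.mse_loss(x_hat, x)   # (v) distortion from output/input differences
    loss = rate + lam * distortion                  # weighted sum of rate and distortion terms
    opt.zero_grad()
    loss.backward()                                 # (vi)-(vii) gradient and back-propagation
    opt.step()
    return loss.item()

# (viii)-(ix): call training_step over a set of training images, then persist the
# trained weights, e.g. torch.save(encoder.state_dict(), "encoder.pt").
```

Because the rate and distortion branches see differently quantized latents, this is one way the gradient evaluation can combine two quantization proxies, matching the split-quantization limitation of claim 1 under the stated assumptions.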