CPC G06T 15/506 (2013.01) [G06T 15/80 (2013.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06T 2200/04 (2013.01); G06T 2200/08 (2013.01)]

11 Claims

1. A method comprising:
generating, using a camera of a mobile device, an image;
accessing a virtual object corresponding to an object in the image;
identifying shading parameters of the virtual object based on the object captured in the image and a machine learning model that is pre-trained with a paired dataset, the paired dataset comprising synthetic source data and synthetic target data, the synthetic source data comprising environment maps and three-dimensional (3D) scans of objects depicted in the environment maps, the synthetic target data comprising a synthetic sphere image rendered in a same environment map, wherein the environment maps include a set of HDR (High Dynamic Range) environment maps, wherein the 3D scans of objects include a set of 3D facial scans of people depicted in a corresponding HDR environment map of the set of HDR environment maps;
training the machine learning model by:
generating, using a first renderer, a synthetic face image based on the set of HDR environment maps and the set of 3D facial scans of people;
generating, using a neural network, predicted lighting parameters based on the synthetic face image;
generating, using a differentiable renderer, a predicted sphere image based on the predicted lighting parameters and a sphere asset that comprises synthetic sphere 3D models;
generating, using a second renderer, the synthetic sphere image based on the set of HDR environment maps and the sphere asset;
comparing the predicted sphere image with the synthetic sphere image using an L2 loss function; and
training the neural network using a result of the L2 loss function via back-propagation;
applying the shading parameters to the virtual object to generate a shaded virtual object; and
displaying, in a display of the mobile device, the shaded virtual object as a layer over the image.
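The training loop recited in claim 1 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: it assumes PyTorch, a spherical-harmonic (SH) lighting parameterization (27 coefficients, 9 per RGB channel), and a simple Lambertian sphere shader standing in for the claimed differentiable renderer. The claim does not specify the first and second renderers, so their outputs (the synthetic face image and the ground-truth lighting derived from an HDR environment map) are supplied as stand-in data; `LightingNet`, `render_sphere`, and `train_step` are hypothetical names.

```python
import torch
import torch.nn as nn

def render_sphere(coeffs, res=64):
    """Differentiable Lambertian sphere shader: shades the visible
    hemisphere of a unit sphere from 9 SH coefficients per RGB channel
    (coeffs: [B, 3, 9]). Gradients flow back into `coeffs`."""
    lin = torch.linspace(-1.0, 1.0, res)
    y, x = torch.meshgrid(lin, lin, indexing="ij")
    r2 = x * x + y * y
    mask = (r2 <= 1.0).float()                  # disc of the sphere
    z = torch.sqrt(torch.clamp(1.0 - r2, min=0.0))
    # First nine real spherical-harmonic basis functions at normals (x, y, z)
    basis = torch.stack([
        0.282095 * torch.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], dim=-1)                                  # [res, res, 9]
    img = torch.einsum("hwk,bck->bchw", basis, coeffs)
    return torch.relu(img) * mask               # zero outside the disc

class LightingNet(nn.Module):
    """CNN mapping a face image to 27 lighting parameters
    (9 SH coefficients per RGB channel -- an assumed parameterization)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 27),
        )
    def forward(self, x):
        return self.net(x)

def train_step(net, opt, face_img, gt_coeffs):
    """One step of the claimed loop: predict lighting from a synthetic
    face image, render a sphere with it, and compare against the
    ground-truth sphere via an L2 loss, back-propagating the result."""
    with torch.no_grad():
        sphere_gt = render_sphere(gt_coeffs)    # "second renderer" target
    pred = net(face_img).view(-1, 3, 9)         # predicted lighting params
    sphere_pred = render_sphere(pred)           # differentiable renderer
    loss = nn.functional.mse_loss(sphere_pred, sphere_gt)  # L2 loss
    opt.zero_grad()
    loss.backward()                             # back-propagation
    opt.step()
    return loss.item()

if __name__ == "__main__":
    net = LightingNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for step in range(100):
        face = torch.rand(8, 3, 128, 128)       # stand-in synthetic faces
        gt = torch.randn(8, 3, 9) * 0.3 + 0.3   # stand-in GT lighting
        loss = train_step(net, opt, face, gt)
    print(f"final L2 loss: {loss:.4f}")
```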
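On the inference side of the claim (generating an image, identifying shading parameters, applying them to the virtual object, and displaying the result as a layer over the image), a hypothetical path reusing `LightingNet` and `render_sphere` from the sketch above might look like the following; `shade_and_composite` and `alpha_mask` are assumed names, and a real AR pipeline would shade the actual virtual object rather than a proxy sphere.

```python
def shade_and_composite(net, camera_img, alpha_mask):
    """Hypothetical inference path: estimate shading parameters from
    the camera image, shade a proxy virtual object (a sphere here)
    with them, and overlay it on the frame as a layer."""
    with torch.no_grad():
        coeffs = net(camera_img).view(-1, 3, 9)   # shading parameters
        obj = render_sphere(coeffs, res=camera_img.shape[-1])
    return alpha_mask * obj + (1 - alpha_mask) * camera_img
```

Alpha compositing stands in here for the claimed display "as a layer over the image", and camera frames are assumed square so the rendered proxy matches the frame resolution.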