US 12,315,090 B1
Augmented reality virtual makeup try-on
Rahul Suresh, Vancouver (CA); Amin Banitalebi Dehkordi, Vancouver (CA); Sabiha Mahek Ahmed, Vancouver (CA); Yury Lizunov, Bothell, WA (US); Radhika Deodhar, Burnaby (CA); and Siliang Liu, Vancouver (CA)
Assigned to AMAZON TECHNOLOGIES, INC., Seattle, WA (US)
Filed by Amazon Technologies, Inc., Seattle, WA (US)
Filed on Mar. 23, 2023, as Appl. No. 18/125,301.
Int. Cl. G06T 19/00 (2011.01); G06T 3/18 (2024.01); G06V 10/25 (2022.01); G06V 10/56 (2022.01); G06V 10/82 (2022.01); G06V 40/16 (2022.01)
CPC G06T 19/006 (2013.01) [G06T 3/18 (2024.01); G06V 10/25 (2022.01); G06V 10/56 (2022.01); G06V 10/82 (2022.01); G06V 40/171 (2022.01); G06V 2201/07 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving a first input image representing a human face;
generating a three-dimensional (3D) mesh representing the human face using depth information received from a depth sensor;
determining estimated illumination characteristics of the 3D mesh using spherical harmonics;
receiving a selection of a virtual makeup asset, the virtual makeup asset comprising a first RGB color value, a finish value, and a sparkle type;
generating a 3D model of the virtual makeup asset warped to conform to the 3D mesh representing the human face;
rendering the 3D model using physically based rendering (PBR), the estimated illumination characteristics, the first RGB color value, the finish value, and the sparkle type;
detecting a plurality of two-dimensional (2D) facial landmarks using a convolutional neural network and the first input image;
generating a warped representation of the rendered 3D model using the plurality of 2D facial landmarks;
determining a position at which to render the warped representation on a second input image representing the human face using the plurality of 2D facial landmarks; and
rendering a 2D view of the warped representation of the rendered 3D model at the position on the second input image representing the human face.
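
The claim does not specify how the illumination characteristics of the 3D mesh are estimated. As a minimal, non-authoritative sketch, second-order (nine-coefficient) spherical-harmonic lighting can be fit to observed per-vertex intensities and mesh normals by least squares; the arrays `normals` and `intensities` below are hypothetical inputs for illustration, not elements of the claimed method.

```python
import numpy as np

def sh_basis(normals):
    """Second-order (9-term) real spherical-harmonic basis evaluated at unit normals.

    normals: (N, 3) array of unit surface normals taken from the face mesh.
    Returns an (N, 9) design matrix using the standard real SH constants.
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # Y_0,0
        0.488603 * y,                      # Y_1,-1
        0.488603 * z,                      # Y_1,0
        0.488603 * x,                      # Y_1,1
        1.092548 * x * y,                  # Y_2,-2
        1.092548 * y * z,                  # Y_2,-1
        0.315392 * (3.0 * z**2 - 1.0),     # Y_2,0
        1.092548 * x * z,                  # Y_2,1
        0.546274 * (x**2 - y**2),          # Y_2,2
    ], axis=1)

def estimate_sh_lighting(normals, intensities):
    """Least-squares fit of 9 SH lighting coefficients to observed per-vertex intensities."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs  # (9,) illumination coefficients consumed by the renderer
```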
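Likewise, the convolutional neural network used to detect the 2D facial landmarks is left unspecified. The following PyTorch module is only an illustrative regression-style sketch; the layer sizes and the 68-landmark output are assumptions, not details taken from the patent.

```python
import torch.nn as nn

class LandmarkNet(nn.Module):
    """Minimal CNN that regresses K 2D facial landmarks from an RGB face crop (hypothetical)."""
    def __init__(self, num_landmarks=68):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_landmarks * 2)

    def forward(self, x):
        # x: (B, 3, H, W) normalized face crop -> (B, K, 2) landmark coordinates
        f = self.features(x).flatten(1)
        return self.head(f).view(-1, self.head.out_features // 2, 2)
```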
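Finally, warping the rendered makeup layer with the detected 2D landmarks and compositing it onto the second input image could be implemented in many ways; one hedged sketch uses scikit-image's piecewise-affine transform between the landmark positions of the rendered layer and those detected in the new frame, followed by alpha blending. `makeup_layer`, `render_landmarks`, `frame_landmarks`, and `frame` are hypothetical inputs assumed to share the frame's resolution.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_makeup_layer(makeup_layer, render_landmarks, frame_landmarks):
    """Warp an RGBA makeup render so its landmarks align with those detected in the new frame.

    makeup_layer:     (H, W, 4) float RGBA render of the virtual makeup asset.
    render_landmarks: (K, 2) x, y landmark positions in the rendered layer.
    frame_landmarks:  (K, 2) corresponding landmark positions detected in the camera frame.
    """
    tform = PiecewiseAffineTransform()
    # skimage pulls output pixels through the transform, so estimate frame -> render coordinates.
    tform.estimate(frame_landmarks, render_landmarks)
    return warp(makeup_layer, tform)

def composite(frame, warped_layer):
    """Alpha-blend the warped makeup layer over the camera frame (both float, in [0, 1])."""
    alpha = warped_layer[..., 3:4]
    return frame * (1.0 - alpha) + warped_layer[..., :3] * alpha
```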