US 12,147,519 B2
User authentication based on three-dimensional face modeling using partial face images
Anupama S, Chennai (IN); Chiranjib Choudhuri, Bangalore (IN); Avani Rao, Bangalore (IN); and Ajit Deepak Gupte, Bangalore (IN)
Assigned to QUALCOMM Incorporated, San Diego, CA (US)
Filed by QUALCOMM Incorporated, San Diego, CA (US)
Filed on Sep. 16, 2022, as Appl. No. 17/932,897.
Prior Publication US 2024/0104180 A1, Mar. 28, 2024
Int. Cl. G06F 21/32 (2013.01); G06V 10/44 (2022.01); G06V 10/74 (2022.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01); G06V 40/16 (2022.01)
CPC G06F 21/32 (2013.01) [G06V 10/82 (2022.01); G06V 40/171 (2022.01)] 30 Claims
OG exemplary drawing
 
1. A method of authenticating a user, the method comprising:
obtaining a plurality of images associated with a face and a facial expression of the user, wherein each respective image of the plurality of images includes a different portion of the face;
generating, using an encoder neural network, one or more predicted three-dimensional (3D) facial modeling parameters, wherein the encoder neural network generates the one or more predicted 3D facial modeling parameters based on the plurality of images;
obtaining a reference 3D facial model associated with the face and an enrolled facial expression of the user, wherein the reference 3D facial model comprises a 3D mesh generated from a plurality of enrollment images of the face of the user with the enrolled facial expression;
determining an error between predicted expression coefficients included in the one or more predicted 3D facial modeling parameters and reference expression coefficients determined for the reference 3D facial model, wherein the predicted expression coefficients correspond to the facial expression and the reference expression coefficients correspond to the enrolled facial expression; and
authenticating the user based on the error being less than a pre-determined authentication threshold.
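The comparison-and-threshold step recited in claim 1 can be sketched as follows. All names, the L2 error metric, and the coefficient values are illustrative assumptions; the claim does not specify a particular error measure or coefficient dimensionality.

```python
import numpy as np

def authenticate(predicted_coeffs: np.ndarray,
                 reference_coeffs: np.ndarray,
                 threshold: float) -> bool:
    """Authenticate if the expression-coefficient error is below threshold.

    `predicted_coeffs` stands in for expression coefficients produced by
    the encoder neural network from the partial-face images;
    `reference_coeffs` stands in for coefficients determined from the
    enrolled reference 3D mesh. The L2 distance used here is one
    plausible error metric; the claim leaves the metric unspecified.
    """
    error = float(np.linalg.norm(predicted_coeffs - reference_coeffs))
    return error < threshold

# Toy example with three expression coefficients (values are made up).
enrolled = np.array([0.10, 0.05, 0.00])  # enrolled-expression coefficients
probe    = np.array([0.12, 0.04, 0.01])  # predicted from partial images
print(authenticate(probe, enrolled, threshold=0.1))  # small error -> True
```

In practice the threshold would be tuned during enrollment or system calibration to trade off false accepts against false rejects.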