US 11,941,753 B2
Face pose estimation/three-dimensional face reconstruction method, apparatus, and electronic device
Shiwei Zhou, Hangzhou (CN)
Assigned to ALIBABA GROUP HOLDING LIMITED, George Town (KY)
Filed by ALIBABA GROUP HOLDING LIMITED, Grand Cayman Islands (KY)
Filed on Feb. 26, 2021, as Appl. No. 17/186,593.
Application 17/186,593 is a continuation of application No. PCT/CN2019/101715, filed on Aug. 21, 2019.
Claims priority of application No. 201810983040.0 (CN), filed on Aug. 27, 2018.
Prior Publication US 2021/0183141 A1, Jun. 17, 2021
Int. Cl. G06T 17/00 (2006.01); G06T 3/00 (2006.01); G06T 7/73 (2017.01); G06V 40/16 (2022.01)
CPC G06T 17/00 (2013.01) [G06T 3/0031 (2013.01); G06T 7/73 (2017.01); G06V 40/174 (2022.01); G06T 2200/08 (2013.01); G06T 2207/30201 (2013.01)] 13 Claims
OG exemplary drawing
 
1. A face pose estimation method, comprising:
acquiring a two-dimensional face image;
constructing a three-dimensional face model corresponding to the two-dimensional face image, wherein the constructing of the three-dimensional face model comprises:
obtaining a plurality of three-dimensional face model samples with a neutral facial expression;
applying a dimensionality reduction algorithm to the plurality of three-dimensional face model samples with the neutral facial expression to determine a three-dimensional average face model;
for each of the plurality of three-dimensional face model samples with the neutral facial expression, generating multiple face model samples corresponding to multiple non-neutral facial expressions, thereby obtaining a plurality of three-dimensional face model samples for each facial expression;
for each facial expression, determining a facial expression base by (1) obtaining an average facial expression model for the facial expression based on the plurality of three-dimensional face model samples for the facial expression; (2) subtracting the average facial expression model for the facial expression from each of the plurality of three-dimensional face model samples for the facial expression to obtain a difference data vector; and (3) determining the difference data vector as the facial expression base for the facial expression;
determining a projection mapping matrix from the three-dimensional average face model to the two-dimensional face image based on internal face feature points of the two-dimensional face image and the three-dimensional average face model, wherein the internal face feature points include one or more of eyes, nose tip, mouth corner points, or eyebrows;
constructing a first three-dimensional face model corresponding to the two-dimensional face image based on the projection mapping matrix and feature vectors of a three-dimensional feature face space;
performing contour feature point fitting on the first three-dimensional face model based on face contour feature points of the two-dimensional face image;
after the contour feature point fitting, performing facial expression fitting on the first three-dimensional face model based on the internal face feature points of the two-dimensional face image and at least one of the facial expression bases;
determining an error between the two-dimensional face image and the first three-dimensional face model; and
in response to the error being less than a preset error, adopting the first three-dimensional face model.
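The dimensionality-reduction step over the neutral-expression samples is commonly realized with principal component analysis, which yields both the three-dimensional average face model and the feature vectors of the "feature face" space used later in the claim. A minimal sketch, assuming each model sample is supplied as a flattened vertex-coordinate vector (the function name, array layout, and component count are illustrative, not from the patent):

```python
import numpy as np

def build_face_basis(samples, n_components=5):
    """PCA over flattened neutral-expression face meshes.

    samples: (N, 3V) array, one row per 3D face model sample,
             each row the mesh's vertex coordinates flattened.
    Returns the average face model and the top principal
    directions (the feature-face basis vectors).
    """
    mean_face = samples.mean(axis=0)
    centered = samples - mean_face
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]
```

Any new face in this space is then expressed as the average model plus a weighted sum of the basis vectors, which is how the first three-dimensional face model can be parameterized during fitting.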
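The per-expression base construction in the claim (average the samples for an expression, then subtract that average from each sample) can be sketched directly; the dict layout and function name below are assumptions for illustration:

```python
import numpy as np

def expression_bases(samples_by_expr):
    """Compute a facial expression base for each expression.

    samples_by_expr: dict mapping expression name -> (N, 3V) array of
                     flattened face model samples showing that expression.
    For each expression: average the samples to get the average facial
    expression model, then subtract it from each sample; the stacked
    difference data vectors form that expression's base.
    """
    bases = {}
    for name, samples in samples_by_expr.items():
        mean_expr = samples.mean(axis=0)
        bases[name] = samples - mean_expr  # difference data vectors
    return bases
```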
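The projection mapping matrix from the three-dimensional average face model to the two-dimensional image can be estimated from corresponding internal feature points (eyes, nose tip, mouth corners, eyebrows) by linear least squares under an affine camera assumption. This is one common formulation, not necessarily the patent's exact one:

```python
import numpy as np

def fit_projection(points_3d, points_2d):
    """Estimate a 2x4 affine projection matrix P such that
    P applied to homogeneous 3D feature-point positions on the
    average model best matches the 2D image feature points.

    points_3d: (n, 3) landmark positions on the 3D average model.
    points_2d: (n, 2) corresponding image feature points.
    """
    n = points_3d.shape[0]
    homo = np.hstack([points_3d, np.ones((n, 1))])  # (n, 4) homogeneous
    # Least-squares solve of homo @ P.T ≈ points_2d.
    p_t, *_ = np.linalg.lstsq(homo, points_2d, rcond=None)
    return p_t.T  # (2, 4)
```

With at least four non-degenerate correspondences the system is well determined; in practice all available internal feature points are used and the solution minimizes squared reprojection error.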
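The final acceptance test compares the fitted model against the image and adopts it when the error falls below a preset threshold. A natural choice of error, assumed here for illustration, is the mean reprojection distance of the model's feature points:

```python
import numpy as np

def reprojection_error(P, points_3d, points_2d):
    """Mean 2D distance between projected 3D feature points and the
    observed image feature points.

    P: (2, 4) projection mapping matrix.
    points_3d: (n, 3) feature points on the fitted 3D face model.
    points_2d: (n, 2) feature points detected in the 2D face image.
    """
    n = points_3d.shape[0]
    homo = np.hstack([points_3d, np.ones((n, 1))])
    projected = homo @ P.T  # (n, 2)
    return np.mean(np.linalg.norm(projected - points_2d, axis=1))
```

The fitted model would then be adopted when `reprojection_error(...) < preset_error`, mirroring the claim's final condition.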