CPC G06T 17/00 (2013.01) [G06T 3/0031 (2013.01); G06T 7/73 (2017.01); G06V 40/174 (2022.01); G06T 2200/08 (2013.01); G06T 2207/30201 (2013.01)] | 13 Claims |
1. A face pose estimation method, comprising:
acquiring a two-dimensional face image;
constructing a three-dimensional face model corresponding to the two-dimensional face image, wherein the constructing of the three-dimensional face model comprises:
obtaining a plurality of three-dimensional face model samples with a neutral facial expression;
applying a dimensionality reduction algorithm to the plurality of three-dimensional face model samples with the neutral facial expression to determine a three-dimensional average face model;
for each of the plurality of three-dimensional face model samples with the neutral facial expression, generating multiple face model samples corresponding to multiple non-neutral facial expressions, thereby obtaining a plurality of three-dimensional face model samples for each facial expression;
for each facial expression, determining a facial expression base by (1) obtaining an average facial expression model for the facial expression based on the plurality of three-dimensional face model samples for the facial expression; (2) subtracting the average facial expression model for the facial expression from each of the plurality of three-dimensional face model samples for the facial expression to obtain a difference data vector; and (3) determining the difference data vector as the facial expression base for the facial expression;
determining a projection mapping matrix from the three-dimensional average face model to the two-dimensional face image based on internal face feature points of the two-dimensional face image and the three-dimensional average face model, wherein the internal face feature points include one or more of eyes, nose tip, mouth corner points, or eyebrows;
constructing a first three-dimensional face model corresponding to the two-dimensional face image based on the projection mapping matrix and feature vectors of a three-dimensional feature face space;
performing contour feature point fitting on the first three-dimensional face model based on face contour feature points of the two-dimensional face image;
after the contour feature point fitting, performing facial expression fitting on the first three-dimensional face model based on the internal face feature points of the two-dimensional face image and at least one of the facial expression bases;
determining an error between the two-dimensional face image and the first three-dimensional face model; and
in response to the error being less than a preset error, adopting the first three-dimensional face model as the three-dimensional face model corresponding to the two-dimensional face image.
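The averaging and expression-base steps of the claim can be sketched in code. This is a minimal illustration, not the claimed implementation: the claim does not name a specific dimensionality-reduction algorithm, so PCA via SVD is assumed here, and all function and variable names (`build_average_and_bases`, `neutral_samples`, `expression_samples`) are hypothetical.

```python
import numpy as np

def build_average_and_bases(neutral_samples, expression_samples):
    """Sketch of the claimed preprocessing:
    - average face model from neutral-expression samples,
    - feature vectors of a feature face space (PCA assumed),
    - per-expression bases as mean-centered difference data vectors.
    neutral_samples: (n_samples, n_vertices*3) flattened neutral meshes.
    expression_samples: dict mapping expression name -> (n_samples, n_vertices*3).
    """
    X = np.asarray(neutral_samples, dtype=float)
    mean_face = X.mean(axis=0)  # three-dimensional average face model

    # Dimensionality reduction: PCA by SVD of the mean-centered samples
    # (an assumption; the claim only says "a dimensionality reduction algorithm").
    _, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    feature_faces = Vt  # feature vectors of the three-dimensional feature face space

    expression_bases = {}
    for expr, samples in expression_samples.items():
        Y = np.asarray(samples, dtype=float)
        expr_mean = Y.mean(axis=0)  # average facial expression model
        # Difference data vectors: sample minus average, per the claim
        expression_bases[expr] = Y - expr_mean
    return mean_face, feature_faces, expression_bases
```

By construction, each expression base is mean-centered: the difference vectors for an expression sum to zero across samples.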
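The projection-mapping-matrix step can likewise be sketched. The claim does not fix a camera model, so an affine (2×4) projection fit by least squares over the internal feature point correspondences (eyes, nose tip, mouth corners, eyebrows) is assumed; `estimate_projection_matrix` is an illustrative name.

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate a 2x4 projection mapping matrix P with
    points_2d ~= [points_3d | 1] @ P.T, by least squares.
    points_3d: (n, 3) model feature points; points_2d: (n, 2) image feature points.
    An affine camera is an assumption, not stated in the claim.
    """
    n = points_3d.shape[0]
    X = np.hstack([points_3d, np.ones((n, 1))])  # homogeneous 3-D coordinates
    # Solve X @ P.T ~= points_2d in the least-squares sense
    P_t, *_ = np.linalg.lstsq(X, points_2d, rcond=None)
    return P_t.T  # shape (2, 4)
```

With at least four non-degenerate correspondences the system is determined; additional feature points overconstrain it and the least-squares solution averages out localization noise.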
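Finally, the error check that gates adoption of the fitted model can be read as a reprojection-error threshold. The claim does not define the error metric, so mean Euclidean reprojection error between projected model feature points and image feature points is assumed here; both function names are hypothetical.

```python
import numpy as np

def reprojection_error(P, model_points_3d, image_points_2d):
    """Mean distance between 2-D image feature points and the 3-D model
    feature points projected by the 2x4 mapping matrix P (assumed metric)."""
    n = model_points_3d.shape[0]
    proj = np.hstack([model_points_3d, np.ones((n, 1))]) @ P.T
    return float(np.linalg.norm(proj - image_points_2d, axis=1).mean())

def adopt_if_converged(P, model_points_3d, image_points_2d, preset_error):
    """Adopt the fitted model when the error is less than the preset error."""
    return reprojection_error(P, model_points_3d, image_points_2d) < preset_error
```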