CPC G06T 13/205 (2013.01) [G06T 13/40 (2013.01); G06T 17/20 (2013.01); G06T 19/006 (2013.01); G10L 21/14 (2013.01); G10L 2021/105 (2013.01)] | 20 Claims
1. A computer-implemented method, comprising:
identifying, from an audio capture of a subject, an audio-correlated facial feature;
generating a first mesh for a lower portion of a face of the subject, based on the audio-correlated facial feature;
identifying an expression-like facial feature of the subject;
generating a second mesh for an upper portion of the face of the subject based on the expression-like facial feature;
forming a synthesized mesh with the first mesh and the second mesh;
determining a loss value of the synthesized mesh based on a ground truth image of the subject;
generating a three-dimensional model of the face of the subject with the synthesized mesh based on the loss value; and
providing the three-dimensional model of the face of the subject to a display in a client device running an immersive reality application that includes the subject.
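The claim above recites a pipeline (two half-face meshes driven by audio and expression features, merged into a synthesized mesh that is evaluated against ground truth) without specifying any implementation. The following is a non-normative toy sketch of that flow; every function name, array shape, and the mesh/loss representation are illustrative assumptions, not the patented method. Meshes are reduced to vertex arrays, the learned feature-to-mesh decoders are replaced by a trivial deterministic mapping, and the loss-driven model generation is stood in for by a closed-form blend toward the ground truth.

```python
# Illustrative sketch only: all names, shapes, and logic are assumptions,
# not the claimed implementation.
import numpy as np


def generate_mesh(features: np.ndarray, n_vertices: int) -> np.ndarray:
    """Toy stand-in for a learned decoder that maps a feature vector
    (audio-correlated or expression-like) to 3D vertex positions."""
    # Tile the feature vector into an (n_vertices, 3) position array.
    return np.resize(features, (n_vertices, 3))


def synthesize(lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    """Form the synthesized mesh from the lower-face and upper-face meshes."""
    return np.vstack([lower, upper])


def loss(mesh: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean-squared vertex error against a ground-truth reconstruction
    (a stand-in for comparison against a ground-truth image)."""
    return float(np.mean((mesh - ground_truth) ** 2))


def refine(mesh: np.ndarray, ground_truth: np.ndarray,
           lr: float = 0.5, steps: int = 10) -> np.ndarray:
    """One illustrative way the loss value could drive model generation:
    a closed-form descent that blends the mesh toward the ground truth."""
    for _ in range(steps):
        mesh = mesh + lr * (ground_truth - mesh)
    return mesh


# End-to-end toy run of the claimed steps.
audio_features = np.linspace(0.0, 1.0, 6)   # "audio-correlated facial feature"
expr_features = np.linspace(1.0, 2.0, 6)    # "expression-like facial feature"
lower = generate_mesh(audio_features, 4)    # first mesh (lower face)
upper = generate_mesh(expr_features, 4)     # second mesh (upper face)
mesh = synthesize(lower, upper)             # synthesized mesh, shape (8, 3)
gt = np.zeros((8, 3))                       # toy ground truth
initial_loss = loss(mesh, gt)
model = refine(mesh, gt)                    # loss-driven 3D model
final_loss = loss(model, gt)
```

In practice the decoders would be neural networks and the loss would be backpropagated; this sketch only mirrors the claim's data flow (two feature streams, two partial meshes, one synthesized mesh, a loss, and a refined model for display).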