| CPC G06T 19/20 (2013.01) [G06V 10/774 (2022.01); G06V 10/945 (2022.01); G06V 20/40 (2022.01); G06V 40/174 (2022.01); G06T 2200/24 (2013.01); G06T 2219/2021 (2013.01)] | 20 Claims |

1. A computer-implemented method comprising:
generating, utilizing a facial expression transfer model comprising an end-to-end network of one or more three-dimensional encoders and a facial expression generative adversarial neural network, a first target facial expression embedding for a first resolution from a target digital image portraying a face having a target facial expression;
generating, utilizing the facial expression transfer model comprising the one or more three-dimensional encoders, a first target pose embedding for the first resolution from the target digital image;
generating, utilizing the facial expression transfer model comprising the one or more three-dimensional encoders, a first source shape embedding for the first resolution from a source digital image portraying a source face having a source pose and a source facial expression;
generating a first combined embedding by concatenating the first target facial expression embedding for the first resolution from the target digital image, the first target pose embedding for the first resolution from the target digital image, and the first source shape embedding for the first resolution from the source digital image;
generating, utilizing the facial expression transfer model comprising the one or more three-dimensional encoders, a second target facial expression embedding for a second resolution from the target digital image;
generating, utilizing the facial expression transfer model comprising the one or more three-dimensional encoders, a second target pose embedding for the second resolution from the target digital image;
generating, utilizing the facial expression transfer model comprising the one or more three-dimensional encoders, a second source shape embedding for the second resolution from the source digital image;
generating a second combined embedding by concatenating the second target facial expression embedding for the second resolution from the target digital image, the second target pose embedding for the second resolution from the target digital image, and the second source shape embedding for the second resolution from the source digital image; and
generating, utilizing the facial expression generative adversarial neural network from the source digital image, a modified source digital image that portrays the source face of the source digital image with the target facial expression from the target digital image by conditioning a first layer of the facial expression generative adversarial neural network with the first combined embedding and conditioning a second layer of the facial expression generative adversarial neural network with the second combined embedding.
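The data flow recited in claim 1 can be sketched in Python. This is a minimal illustrative mock-up, not the claimed implementation: the encoder, the conditioning scheme, and every name and dimension below are assumptions chosen for clarity, with random projections standing in for the trained three-dimensional encoders and generator layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, dim):
    # Stand-in for a trained 3D encoder: project the flattened image
    # to an embedding of length `dim` (illustrative only).
    flat = image.reshape(-1)
    w = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return w @ flat

def combined_embedding(target, source, dim):
    # Per the claim: expression and pose embeddings come from the
    # target image, the shape embedding from the source image, and
    # the three are concatenated into one combined embedding.
    expression = encode(target, dim)
    pose = encode(target, dim)
    shape = encode(source, dim)
    return np.concatenate([expression, pose, shape])  # length 3 * dim

def condition(features, embedding):
    # Toy conditioning of a generator layer's features on an
    # embedding (a simple multiplicative modulation, assumed here).
    w = rng.standard_normal((features.size, embedding.size)) / np.sqrt(embedding.size)
    return features * (1.0 + np.tanh(w @ embedding))

target = rng.standard_normal((64, 64))  # face with the target expression
source = rng.standard_normal((64, 64))  # face to be modified

# One combined embedding per resolution; each conditions a
# different layer of the generative network.
emb_first = combined_embedding(target, source, dim=32)    # first resolution
emb_second = combined_embedding(target, source, dim=128)  # second resolution

layer1 = condition(rng.standard_normal(256), emb_first)
layer2 = condition(rng.standard_normal(1024), emb_second)
```

The two-resolution structure mirrors the claim: coarser combined embeddings condition earlier generator layers and finer ones condition later layers, so expression and pose from the target steer the generation while the source's shape embedding preserves the source identity.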