US 12,125,129 B2
Facial animation transfer
Sergey Demyanov, Santa Monica, CA (US); Aleksei Podkin, Santa Monica, CA (US); Aliaksandr Siarohin, Santa Monica, CA (US); Aleksei Stoliar, Marina del Rey, CA (US); and Sergey Tulyakov, Marina del Rey, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Apr. 19, 2023, as Appl. No. 18/136,470.
Application 18/136,470 is a continuation of application No. 17/303,537, filed on Jun. 1, 2021, granted, now Pat. No. 11,645,798.
Claims priority of provisional application 63/032,858, filed on Jun. 1, 2020.
Prior Publication US 2023/0252704 A1, Aug. 10, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 13/00 (2011.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06V 40/16 (2022.01)
CPC G06T 13/00 (2013.01) [G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06V 40/171 (2022.01); G06V 40/174 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
generating, by a computing device, a source image sequence using an image sensor of the computing device, the source image sequence comprising a plurality of source images depicting a source head and source face;
identifying driving image sequence data to modify face image feature data in the source image sequence, the driving image sequence data comprising an ordered set of image arrays that depicts a head in different head poses;
identifying an expression dataset to modify face image feature data in the source image sequence, the expression dataset comprising an unordered set of image arrays that depicts the head in different head poses and a face in different expressions;
generating, using an image transformation neural network, a modified source image sequence comprising a plurality of modified source images depicting modified versions of the source head and source face based on the driving image sequence data and the expression dataset; and
storing the modified source image sequence on the computing device.
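For illustration only, the sketch below (not part of the patent text and not the patented model) shows one way the steps of claim 1 could be arranged in code: a source sequence captured from a camera is modified frame by frame using an ordered driving sequence and an unordered expression dataset, then stored. The network ImageTransformationNet, the helper transfer_animation, the frame pairing, and all tensor shapes are assumptions made for this sketch.

# Minimal, hypothetical sketch of the claimed pipeline (assumes PyTorch).
import torch
import torch.nn as nn


class ImageTransformationNet(nn.Module):
    """Hypothetical stand-in for the claimed "image transformation neural
    network": encodes source, driving, and expression frames and decodes a
    modified source frame."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        # Shared encoder applied to source, driving, and expression frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Decoder fuses the three feature maps into one modified source frame.
        self.decoder = nn.Sequential(
            nn.Conv2d(hidden * 3, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source, driving, expression):
        feats = torch.cat(
            [self.encoder(source), self.encoder(driving), self.encoder(expression)],
            dim=1,
        )
        return self.decoder(feats)


def transfer_animation(source_seq, driving_seq, expression_set, net):
    """Produce one modified frame per source frame, posed by the
    corresponding frame of the ordered driving sequence and conditioned on a
    sample from the unordered expression dataset (pairing is an assumption)."""
    modified = []
    with torch.no_grad():
        for i, src in enumerate(source_seq):
            drv = driving_seq[min(i, len(driving_seq) - 1)]  # ordered set of image arrays
            exp = expression_set[i % len(expression_set)]    # unordered set of image arrays
            modified.append(
                net(src.unsqueeze(0), drv.unsqueeze(0), exp.unsqueeze(0))[0]
            )
    return torch.stack(modified)


if __name__ == "__main__":
    net = ImageTransformationNet()
    # Toy tensors standing in for camera frames, shape (C, H, W), values in [0, 1].
    source_seq = torch.rand(8, 3, 64, 64)      # "source image sequence"
    driving_seq = torch.rand(8, 3, 64, 64)     # "driving image sequence data"
    expression_set = torch.rand(4, 3, 64, 64)  # "expression dataset"
    out = transfer_animation(source_seq, driving_seq, expression_set, net)
    # "storing the modified source image sequence on the computing device"
    torch.save(out, "modified_source_sequence.pt")

The single shared encoder is only a simplification chosen to keep the sketch short; the claim itself does not specify how the network combines pose information from the driving data with expression information from the unordered dataset.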