US 11,875,600 B2
Facial synthesis in augmented reality content for online communities
Roman Golobokov, London (GB); Alexandr Marinenko, Lehi, UT (US); Aleksandr Mashrabov, Los Angeles, CA (US); Aleksei Bromot, London (GB); and Grigoriy Tkachenko, London (GB)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Mar. 29, 2022, as Appl. No. 17/706,830.
Claims priority of provisional application 63/168,996, filed on Mar. 31, 2021.
Prior Publication US 2022/0319230 A1, Oct. 6, 2022
Int. Cl. G06V 40/16 (2022.01); G06T 19/00 (2011.01); G06T 13/40 (2011.01); G06T 17/00 (2006.01)
CPC G06V 40/168 (2022.01) [G06T 19/006 (2013.01); G06V 40/174 (2022.01); G06T 2200/24 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method, comprising:
capturing, by one or more hardware processors, image data by a client device, the captured image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions of the target actor including lip movements;
generating, by the one or more hardware processors and based at least in part on frames of a source media content, sets of source pose parameters, the sets of the source pose parameters comprising positions of representations of a head of a source actor and facial expressions of the source actor in the frames of the source media content, the source media content comprising a source video with the facial expressions of the source actor that are different than the captured image data including the facial expressions of the target actor;
providing, by the one or more hardware processors, for display a set of selectable graphical items, the set of selectable graphical items comprising different graphical representations of a set of facial expressions, each of the selectable graphical items comprising a particular representation of a facial expression among the set of facial expressions;
receiving, by the one or more hardware processors, a selection of a particular selectable graphical item of a particular facial expression from the different graphical representations of the set of facial expressions;
determining the particular facial expression corresponding to the particular selectable graphical item that has been selected;
performing a modification of the head of the target actor and the facial expressions of the target actor in the captured image data based on the particular facial expression;
generating, based at least in part on sets of the source pose parameters and the modification of the head of the target actor and the facial expressions of the target actor in the captured image data, by the one or more hardware processors, an output media content, each frame of the output media content including an image of the target face, from the captured image data, in at least one frame of the output media content, the image of the target face being modified based on at least one of the sets of the source pose parameters to mimic at least one of positions of the head of the source actor in the frames of the source media content and at least the particular facial expression from the set of facial expressions; and
providing, by the one or more hardware processors, augmented reality content based at least in part on the output media content for display on a computing device.
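The claimed method can be read as a pipeline: extract per-frame pose parameters (head position plus expression weights) from the source media, apply the user-selected expression to the captured target face, then emit output frames in which the target face mimics the source head positions and the chosen expression. The sketch below is an illustrative reading of that pipeline only, not the patent's actual implementation; every function, class, and field name (`PoseParameters`, `extract_source_pose_parameters`, `modify_target_face`, `generate_output_media`, the dict-based frame representation) is a hypothetical stand-in for the neural rendering components the specification would describe.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PoseParameters:
    """One set of source pose parameters for a single source frame (hypothetical encoding)."""
    head_position: tuple          # (x, y, rotation) of the source actor's head
    expression: Dict[str, float]  # expression weights, e.g. {"smile": 0.8, "lip_open": 0.3}


def extract_source_pose_parameters(source_frames: List[dict]) -> List[PoseParameters]:
    """Generate sets of source pose parameters from frames of the source media content.

    Here each source frame is a pre-analyzed dict; a real system would run a
    face/pose estimator on raw video frames instead.
    """
    return [PoseParameters(head_position=f["head_position"],
                           expression=f["expression"])
            for f in source_frames]


def modify_target_face(target_face: dict, selected_expression: str) -> dict:
    """Modify the target face based on the particular facial expression the user selected."""
    modified = dict(target_face)
    modified["applied_expression"] = selected_expression
    return modified


def generate_output_media(target_face: dict,
                          source_params: List[PoseParameters],
                          selected_expression: str) -> List[dict]:
    """Build output frames: the target face, modified per the selected expression,
    mimicking the source actor's head position in each source frame."""
    face = modify_target_face(target_face, selected_expression)
    return [{"face": face,
             "head_position": p.head_position,
             # Source expression weights, overridden so the selected
             # expression is fully applied in every output frame.
             "expression": {**p.expression, selected_expression: 1.0}}
            for p in source_params]
```

A caller would capture the target face on the client device, pick an expression from the selectable graphical items, and pass both through `generate_output_media`; the resulting frames would then drive the augmented reality content displayed on the computing device. The dict/tuple encodings here simply make the data flow of the claim explicit.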