US 12,148,064 B2
Facial synthesis in augmented reality content for advertisements
Alexandr Marinenko, Lehi, UT (US); Aleksandr Mashrabov, Los Angeles, CA (US); and Alexey Pchelnikov, London (GB)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Mar. 24, 2022, as Appl. No. 17/703,716.
Claims priority of provisional application 63/200,878, filed on Mar. 31, 2021.
Prior Publication US 2022/0319060 A1, Oct. 6, 2022
Int. Cl. G06T 11/00 (2006.01); G06Q 30/0241 (2023.01); G06Q 50/00 (2024.01)
CPC G06T 11/00 (2013.01) [G06Q 30/0276 (2013.01); G06Q 50/01 (2013.01)] 14 Claims
OG exemplary drawing
 
1. A method, comprising:
receiving, by one or more hardware processors, frames of a source media content, the frames of the source media content including representations of a head and a face of a source actor;
generating, by the one or more hardware processors and based at least in part on the frames of the source media content, sets of source pose parameters, the sets of source pose parameters comprising positions of the representations of the head of the source actor and facial expressions of the source actor in the frames of the source media content, wherein generating the sets of source pose parameters is performed using a shared encoder network, the shared encoder network comprising a first deep convolutional neural network;
receiving, by the one or more hardware processors, at least one target image, the at least one target image including representations of a target head and a target face of a target entity;
generating, by the one or more hardware processors and based at least in part on the sets of source pose parameters, an output media content, at least one frame of the output media content including an image of the target face, the image of the target face being modified based on at least one of the sets of source pose parameters to mimic at least one of the positions of the head of the source actor and at least one of the facial expressions of the source actor, wherein generating the output media content is performed using a first decoder network, the first decoder network comprising a second deep convolutional neural network;
providing, by the one or more hardware processors, an online advertisement based at least in part on the output media content for display on a computing device; and
causing playback of the online advertisement, based on a duration of the online advertisement, in a messaging client application on a client device, wherein the online advertisement comprises a video that cannot be skipped or paused during playback of the video, the online advertisement further comprises an image that cannot be hidden during the duration of the online advertisement, the image being selected based on respective properties comprising a name, a brand name, and a headline.
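The claimed method can be read as an encoder-decoder reenactment pipeline: a shared encoder regresses per-frame pose parameters (head position and facial expression) from the source frames, and a decoder re-renders the target face under each set of parameters. The following is a minimal structural sketch of that data flow; all names (`SharedEncoder`, `FaceDecoder`, `PoseParams`, `synthesize`) are illustrative assumptions and not from the patent, and the stand-in classes replace the deep convolutional neural networks the claim actually recites.

```python
# Hypothetical sketch of the claimed pipeline (names are illustrative, not
# from the patent). A real system would implement the encoder and decoder
# as deep convolutional neural networks operating on pixels.
from dataclasses import dataclass
from typing import List

@dataclass
class PoseParams:
    head_position: tuple       # e.g. (yaw, pitch, roll) of the source head
    expression: List[float]    # facial-expression coefficients

class SharedEncoder:
    """Stand-in for the first deep CNN: frame -> source pose parameters."""
    def encode(self, frame: dict) -> PoseParams:
        # A real encoder would regress these values from the frame's pixels.
        return PoseParams(head_position=frame["head"], expression=frame["expr"])

class FaceDecoder:
    """Stand-in for the second deep CNN: target image + params -> output frame."""
    def decode(self, target_image: str, params: PoseParams) -> dict:
        # A real decoder would synthesize the target face mimicking the
        # source pose and expression; here we just pair the inputs.
        return {"target": target_image, "pose": params}

def synthesize(source_frames: list, target_image: str) -> list:
    encoder, decoder = SharedEncoder(), FaceDecoder()
    pose_sets = [encoder.encode(f) for f in source_frames]  # one set per frame
    return [decoder.decode(target_image, p) for p in pose_sets]

frames = [{"head": (0.0, 0.1, 0.0), "expr": [0.2, 0.8]}]
output = synthesize(frames, "target.png")
```

The output media content has one frame per source frame, each carrying the pose parameters extracted from the corresponding source frame, mirroring the per-frame correspondence the claim describes.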