US 11,887,235 B2
Puppeteering remote avatar by facial expressions
Tarek Hefny, Redmond, WA (US); Nicholas Reiter, Mountain View, CA (US); Brandon Young, Mountain View, CA (US); Arun Kandoor, Santa Clara, CA (US); and Dillon Cower, Mountain View, CA (US)
Assigned to Google LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Nov. 23, 2022, as Appl. No. 18/058,621.
Application 18/058,621 is a continuation of application No. 17/052,161, granted, now 11,538,211, previously published as PCT/US2019/030218, filed on May 1, 2019.
Claims priority of provisional application 62/667,767, filed on May 7, 2018.
Prior Publication US 2023/0088308 A1, Mar. 23, 2023
Int. Cl. G06T 13/40 (2011.01); G06T 17/20 (2006.01); G06T 19/20 (2011.01); G06T 7/73 (2017.01); G06T 7/13 (2017.01); H04L 67/10 (2022.01)
CPC G06T 13/40 (2013.01) [G06T 7/13 (2017.01); G06T 7/73 (2017.01); G06T 17/20 (2013.01); G06T 19/20 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/30201 (2013.01); H04L 67/10 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method that, when executed by data processing hardware, causes the data processing hardware to perform operations comprising:
receiving, from a first user device associated with a first user, a first captured image comprising a first facial framework of a face of the first user, and associated audio data spoken by the first user;
identifying, from the first facial framework, a facial cavity corresponding to at least one of an eye of the first user or a mouth of the first user;
rendering the facial cavity onto the first facial framework;
determining a facial texture corresponding to the face of the first user based on the first facial framework with the rendered facial cavity;
transmitting, to a second user device associated with a second user, the facial texture as a three-dimensional avatar corresponding to a virtual representation of the face of the first user and the associated audio data spoken by the first user, the second user device configured to synchronously display the three-dimensional avatar and output the associated audio data;
receiving, from the first user device, a second captured image comprising a second facial framework of the face of the first user;
updating the facial texture based on the second facial framework; and
transmitting, to the second user device, the updated facial texture as the three-dimensional avatar corresponding to the virtual representation of the face of the first user.
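The claim above walks through a server-style relay: a first frame with a facial framework and audio arrives, eye and mouth cavities are identified and rendered onto the framework, a facial texture is determined and forwarded as a 3D avatar for synchronous display with the audio, and later frames update that texture rather than rebuilding it. The following Python sketch is purely illustrative of that flow under stated assumptions; the names (AvatarPuppeteeringService, FacialFramework, handle_frame, and so on) are hypothetical and not taken from the patent, and the texture operations are placeholders standing in for the rendering and texturing steps the claim recites.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Vertex = Tuple[float, float, float]

@dataclass
class FacialFramework:
    """Sparse face mesh captured on the first user's device (hypothetical structure)."""
    vertices: List[Vertex]                 # 3D landmark positions
    eye_indices: List[int]                 # vertices outlining the eye cavities
    mouth_indices: List[int]               # vertices outlining the mouth cavity

@dataclass
class CapturedFrame:
    framework: FacialFramework
    image_rgb: bytes                       # pixels aligned with the framework
    audio_chunk: Optional[bytes] = None    # audio spoken while the frame was captured

@dataclass
class AvatarPacket:
    """Payload forwarded to the second user's device for synchronous display/playback."""
    facial_texture: bytes
    framework: FacialFramework
    audio_chunk: Optional[bytes]

class AvatarPuppeteeringService:
    """Toy pipeline mirroring the claimed steps; every method body is a stand-in."""

    def __init__(self, send_to_receiver: Callable[[AvatarPacket], None]):
        self.send_to_receiver = send_to_receiver   # stands in for the network hop
        self._texture: Optional[bytes] = None      # facial texture cached across frames

    def handle_frame(self, frame: CapturedFrame) -> None:
        # 1) Identify facial cavities (eyes, mouth) from the received framework.
        cavity_indices = frame.framework.eye_indices + frame.framework.mouth_indices
        framework = frame.framework                # 2) cavities "rendered" onto the framework (placeholder)
        # 3) Determine the facial texture on the first frame; update it on later frames.
        if self._texture is None:
            self._texture = self._determine_texture(frame.image_rgb, framework, cavity_indices)
        else:
            self._texture = self._update_texture(self._texture, frame.image_rgb, framework)
        # 4) Transmit texture + framework + audio so the receiver can display the
        #    avatar and output the audio in sync.
        self.send_to_receiver(AvatarPacket(self._texture, framework, frame.audio_chunk))

    def _determine_texture(self, image: bytes, fw: FacialFramework,
                           cavities: List[int]) -> bytes:
        # Placeholder: a real system would map image pixels over the mesh,
        # filling the cavity regions identified above.
        return image

    def _update_texture(self, old: bytes, image: bytes, fw: FacialFramework) -> bytes:
        # Placeholder: fold new pixels into the cached texture instead of rebuilding it.
        return image if image else old

if __name__ == "__main__":
    received = []
    service = AvatarPuppeteeringService(received.append)
    fw = FacialFramework(vertices=[(0.0, 0.0, 0.0)], eye_indices=[0], mouth_indices=[0])
    service.handle_frame(CapturedFrame(fw, b"frame-1", b"hello"))   # first image + audio
    service.handle_frame(CapturedFrame(fw, b"frame-2"))             # second image updates texture
    print(len(received), "packets forwarded to the second user's device")

One design point the sketch reflects: the service caches the facial texture and sends incremental updates keyed to each new facial framework, which is what lets the second device animate a persistent 3D avatar rather than receiving full video frames.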