US 11,741,616 B2
Expression transfer across telecommunications networks
Thomas Yamasaki, Anaheim Hills, CA (US); Rocky Chau-Hsiung Lin, Cupertino, CA (US); and Koichiro Kanda, San Jose, CA (US)
Assigned to CONNECTIVITY LABS INC., Cupertino, CA (US)
Filed by Connectivity Labs Inc., Cupertino, CA (US)
Filed on Apr. 20, 2021, as Appl. No. 17/235,631.
Application 17/235,631 is a continuation of application No. 16/298,994, filed on Mar. 11, 2019, granted, now 10,984,537.
Application 16/298,994 is a continuation of application No. 16/001,714, filed on Jun. 6, 2018, granted, now 10,229,507, issued on Mar. 12, 2019.
Application 16/001,714 is a continuation of application No. 15/793,478, filed on Oct. 25, 2017, granted, now 9,996,940, issued on Jun. 12, 2018.
Prior Publication US 2021/0241465 A1, Aug. 5, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/246 (2017.01); G06T 1/00 (2006.01); G06T 11/60 (2006.01); G06T 13/80 (2011.01); H04N 7/14 (2006.01); G06T 11/00 (2006.01)
CPC G06T 7/246 (2017.01) [G06T 1/0007 (2013.01); G06T 11/00 (2013.01); G06T 11/60 (2013.01); G06T 13/80 (2013.01); H04N 7/147 (2013.01); G06T 2207/30201 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method comprising:
transmitting an avatar to a destination device, the avatar having been generated based on a face of a person;
capturing an image of the face of the person on a source device;
calculating expression information based on the image of the face of the person, wherein the expression information approximates an expression on the face of the person;
transmitting the expression information from the source device to the destination device;
animating the avatar on a display component of the destination device using the expression information;
transmitting a second avatar to the source device, the second avatar having been generated based on a face of a second person;
capturing an image of the face of the second person on the destination device;
calculating second expression information based on the image of the face of the second person, wherein the second expression information approximates an expression on the face of the second person;
transmitting the second expression information from the destination device to the source device; and
animating the second avatar on a display component of the source device using the second expression information.
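
Illustrative sketch (not part of the patent text): claim 1 recites a round trip in which each device transmits compact expression information, rather than the captured image, and the receiving device animates a previously transmitted avatar with it. Below is a minimal Python sketch of one direction of that loop, assuming a blendshape-style parameterization; the names ExpressionInfo, estimate_expression, and Avatar are hypothetical, and the claims do not mandate any particular detector, encoding, or renderer.

import json
from dataclasses import dataclass, field


@dataclass
class ExpressionInfo:
    """Compact parameters approximating a facial expression.

    Blendshape-style weights in [0, 1] are one plausible encoding; the
    claim only requires information that "approximates an expression on
    the face of the person."
    """
    weights: dict = field(default_factory=dict)

    def to_bytes(self) -> bytes:
        # Only this small parameter vector is transmitted, not the image.
        return json.dumps(self.weights).encode("utf-8")

    @classmethod
    def from_bytes(cls, data: bytes) -> "ExpressionInfo":
        return cls(weights=json.loads(data.decode("utf-8")))


def estimate_expression(image) -> ExpressionInfo:
    """Stand-in for 'calculating expression information based on the
    image of the face'; a real system might fit landmark positions to
    blendshape weights. Fixed values are returned for illustration."""
    return ExpressionInfo(weights={"smile": 0.8, "brow_raise": 0.2, "jaw_open": 0.1})


class Avatar:
    """Destination-side avatar previously generated from the person's face."""

    def animate(self, info: ExpressionInfo) -> None:
        # A real renderer would drive rig channels; printing stands in here.
        for channel, weight in sorted(info.weights.items()):
            print(f"drive {channel!r} to {weight:.2f}")


# Source device: capture a frame (stubbed) and encode its expression.
captured_image = object()  # placeholder for a camera frame
payload = estimate_expression(captured_image).to_bytes()

# Destination device: decode the received payload and animate the avatar.
Avatar().animate(ExpressionInfo.from_bytes(payload))

Because only the small parameter vector crosses the network, the image of the face never leaves the capturing device. The two claimed directions are symmetric, so the same sketch applies with the source and destination roles swapped.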