US 12,347,012 B2
Sentiment-based interactive avatar system for sign language
Yusuf AbdElhakam AbdElkader Marey, Tulsa, OK (US); and Reda Harb, Tampa, FL (US)
Assigned to Adeia Guides Inc., San Jose, CA (US)
Filed by Rovi Guides, Inc., San Jose, CA (US)
Filed on Jan. 12, 2024, as Appl. No. 18/411,611.
Application 18/411,611 is a continuation of application No. 17/240,128, filed on Apr. 26, 2021, granted, now 11,908,056.
Prior Publication US 2024/0153186 A1, May 9, 2024
Int. Cl. G06T 13/40 (2011.01); G06F 40/47 (2020.01); G06T 13/20 (2011.01); G06V 40/16 (2022.01); G06V 40/20 (2022.01); G09B 21/00 (2006.01); G10L 15/18 (2013.01); G10L 15/22 (2006.01); G10L 21/10 (2013.01); G10L 25/63 (2013.01); G10L 21/06 (2013.01)
CPC G06T 13/40 (2013.01) [G06F 40/47 (2020.01); G06T 13/205 (2013.01); G06V 40/174 (2022.01); G06V 40/20 (2022.01); G09B 21/009 (2013.01); G10L 15/1815 (2013.01); G10L 15/22 (2013.01); G10L 21/10 (2013.01); G10L 25/63 (2013.01); G10L 2021/065 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method comprising:
capturing video data using a camera of a device and capturing audio data using a microphone of the device;
extracting a spoken word from the audio data and an image of a speaker who uttered the spoken word from the video data;
querying a sign language database to determine a translation of the spoken word to a sign language gesture;
identifying visual characteristics of the speaker based on the extracted image of the speaker who uttered the spoken word;
generating an avatar based on the identified visual characteristics of the speaker who uttered the spoken word;
identifying, in a model database, a skeleton model representing the sign language gesture; and
generating for display an animation of the avatar that was generated based on the identified visual characteristics of the speaker who uttered the spoken word performing the sign language gesture by applying the skeleton model to the avatar.
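The steps of claim 1 can be illustrated as a minimal pipeline sketch. This is not the patented implementation: the database contents, the `Avatar` fields, and every function name here are hypothetical stand-ins, with speech recognition, face analysis, and rendering reduced to toy lookups.

```python
# Hypothetical sketch of the claim-1 pipeline; all names and data are illustrative.
from dataclasses import dataclass, field

# Toy "sign language database": spoken word -> gesture identifier (assumption).
SIGN_DB = {"hello": "HELLO_GESTURE", "thanks": "THANKS_GESTURE"}

# Toy "model database": gesture identifier -> skeleton-model keyframes (assumption).
MODEL_DB = {
    "HELLO_GESTURE": [("raise_hand", 0.0), ("wave", 0.5)],
    "THANKS_GESTURE": [("hand_to_chin", 0.0), ("extend_forward", 0.4)],
}

@dataclass
class Avatar:
    """Avatar generated from the speaker's identified visual characteristics."""
    hair_color: str
    skin_tone: str
    frames: list = field(default_factory=list)  # animation frames after skeleton applied

def identify_visual_characteristics(speaker_image: dict) -> dict:
    # Stand-in for analyzing the image of the speaker extracted from the video data.
    return {
        "hair_color": speaker_image.get("hair_color", "brown"),
        "skin_tone": speaker_image.get("skin_tone", "medium"),
    }

def translate_word(spoken_word: str) -> str:
    # Query the sign language database for the gesture translating the spoken word.
    return SIGN_DB[spoken_word.lower()]

def animate(spoken_word: str, speaker_image: dict) -> Avatar:
    gesture = translate_word(spoken_word)                   # translation lookup
    traits = identify_visual_characteristics(speaker_image) # visual characteristics
    avatar = Avatar(**traits)                               # generate the avatar
    skeleton = MODEL_DB[gesture]                            # skeleton-model lookup
    # "Apply" the skeleton model: copy its poses into the avatar's animation frames.
    avatar.frames = [pose for pose, _time in skeleton]
    return avatar

avatar = animate("hello", {"hair_color": "black"})
print(avatar.hair_color, avatar.frames)  # black ['raise_hand', 'wave']
```

In practice each toy lookup would be a full subsystem (speech-to-text, gesture translation, appearance modeling, skeletal animation); the sketch only shows how the claimed steps chain together.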