US 12,315,057 B2
Avatar facial expressions based on semantical context
Scott Beith, Carlsbad, CA (US); Suzana Arellano, San Diego, CA (US); Michel Adib Sarkis, San Diego, CA (US); Matthew Fischler, San Diego, CA (US); Ke-Li Cheng, San Diego, CA (US); and Stephane Villette, San Diego, CA (US)
Assigned to QUALCOMM Incorporated, San Diego, CA (US)
Filed by QUALCOMM Incorporated, San Diego, CA (US)
Filed on Sep. 7, 2022, as Appl. No. 17/930,244.
Prior Publication US 2024/0078732 A1, Mar. 7, 2024
Int. Cl. G06T 13/40 (2011.01); G06F 3/01 (2006.01); G06V 20/40 (2022.01); G06V 40/16 (2022.01)
CPC G06T 13/40 (2013.01) [G06F 3/012 (2013.01); G06V 20/41 (2022.01); G06V 40/174 (2022.01)] 30 Claims
OG exemplary drawing
 
1. A device comprising:
a memory configured to store sensor data including a conversation represented in audio data; and
one or more processors coupled to the memory, the one or more processors configured to:
process the sensor data to determine:
a semantical context associated with the sensor data, the semantical context based at least in part on a social context associated with the conversation and a type of relationship between a user and one or more participants in the conversation; and
a magnitude of an emotion that corresponds to the semantical context; and
generate adjusted face data for the user based on the semantical context and face data, the adjusted face data including an avatar facial expression based on the emotion and the magnitude of the emotion.
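Claim 1 recites a two-stage pipeline: first determine a semantical context (social context, relationship type, emotion, and emotion magnitude) from conversation audio, then adjust the avatar's face data so the expression reflects both the emotion and its magnitude. The Python sketch below is purely illustrative and not from the patent: every name (SemanticalContext, Relationship, EMOTION_BLENDSHAPES, determine_semantical_context, generate_adjusted_face_data) is hypothetical, the keyword heuristic stands in for the speech/NLP models a real device would run on the stored audio data, and blendshape weights stand in for "face data".

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class Relationship(Enum):
    """Type of relationship between the user and a conversation participant."""
    FAMILY = "family"
    FRIEND = "friend"
    COWORKER = "coworker"
    STRANGER = "stranger"


@dataclass
class SemanticalContext:
    """Semantical context as recited in the claim (hypothetical structure)."""
    social_context: str          # e.g. "business_meeting", "casual_chat"
    relationship: Relationship   # relationship to the other participant(s)
    emotion: str                 # emotion corresponding to the context
    magnitude: float             # intensity of the emotion, 0.0..1.0


# Hypothetical mapping from an emotion label to the avatar blendshapes
# that express it; a real rig might use FACS-style action units instead.
EMOTION_BLENDSHAPES: Dict[str, List[str]] = {
    "joy": ["mouth_smile_left", "mouth_smile_right", "cheek_raise"],
    "sadness": ["brow_inner_up", "mouth_frown_left", "mouth_frown_right"],
}


def determine_semantical_context(
    transcript: str, relationship: Relationship
) -> SemanticalContext:
    """Toy stand-in for the claimed processing: infer a social context,
    an emotion, and its magnitude from the conversation. A real device
    would run speech and language models on the stored audio data."""
    formal = relationship in (Relationship.COWORKER, Relationship.STRANGER)
    social_context = "business_meeting" if formal else "casual_chat"
    emotion = "joy" if "congratulations" in transcript.lower() else "sadness"
    # Damp the expression in formal settings: same emotion, lower magnitude.
    magnitude = 0.4 if formal else 0.9
    return SemanticalContext(social_context, relationship, emotion, magnitude)


def generate_adjusted_face_data(
    face_data: Dict[str, float], ctx: SemanticalContext
) -> Dict[str, float]:
    """Produce adjusted face data by raising the blendshape weights for
    the inferred emotion by its magnitude (clamped to the valid range)."""
    adjusted = dict(face_data)  # leave the captured face data unmodified
    for shape in EMOTION_BLENDSHAPES.get(ctx.emotion, []):
        adjusted[shape] = min(1.0, adjusted.get(shape, 0.0) + ctx.magnitude)
    return adjusted


if __name__ == "__main__":
    ctx = determine_semantical_context(
        "Congratulations on the launch!", Relationship.COWORKER
    )
    base_face = {"mouth_smile_left": 0.1, "mouth_smile_right": 0.1}
    print(ctx)
    print(generate_adjusted_face_data(base_face, ctx))
```

In this sketch the coworker relationship yields a formal social context, so the same "joy" emotion is applied at a reduced magnitude, which mirrors the claim's dependence of the expression on both the emotion and its context-derived magnitude.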