CPC G06T 13/00 (2013.01) [G06F 3/011 (2013.01); G06F 3/147 (2013.01); G06F 3/16 (2013.01); G06T 13/40 (2013.01); G06V 40/174 (2022.01); G06V 40/20 (2022.01); G09F 13/049 (2021.05); G10L 13/02 (2013.01); G10L 15/063 (2013.01); G10L 15/16 (2013.01); G10L 15/22 (2013.01); G10L 25/63 (2013.01); H05B 47/12 (2020.01); H05B 47/125 (2020.01); G06F 3/012 (2013.01); G06F 3/017 (2013.01); G06F 2203/011 (2013.01); G10L 2015/0638 (2013.01); G10L 2015/227 (2013.01)]

20 Claims
1. A method, comprising:
determining, with an interactive system, a contextual response to a user input;
generating, with the interactive system, a digital human; and
conveying, with the digital human, the contextual response to the user in real time, wherein the digital human is configured to convey the contextual response with a predetermined behavior corresponding to the contextual response;
wherein the predetermined behavior includes a sequence of keypoints corresponding to visual representations of the digital human and to image frames, wherein at least one keypoint of the sequence of keypoints anchors the predetermined behavior with a portion of audio of the contextual response, and wherein the predetermined behavior includes an eye movement of the digital human.
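Editorial note (not part of the claim): the claim recites a predetermined behavior as a sequence of keypoints tied to image frames, with at least one keypoint anchored to a portion of the response audio and with an eye movement. Below is a minimal, hypothetical sketch of one way such a structure could be represented in code. All names (Keypoint, Behavior, gaze_target, audio_anchor_ms, frame_index) are illustrative assumptions and are not drawn from the patent or any particular implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Keypoint:
    """One visual pose of the digital human, corresponding to an image frame."""
    frame_index: int                        # image frame this keypoint corresponds to
    gaze_target: Tuple[float, float]        # eye movement: normalized gaze direction
    audio_anchor_ms: Optional[int] = None   # offset into the response audio, if anchored


@dataclass
class Behavior:
    """A predetermined behavior: an ordered sequence of keypoints."""
    name: str
    keypoints: List[Keypoint] = field(default_factory=list)

    def anchored_keypoints(self) -> List[Keypoint]:
        # Keypoints anchored to a portion of the contextual-response audio.
        return [k for k in self.keypoints if k.audio_anchor_ms is not None]


# Example: a nod-while-speaking behavior whose middle keypoint is anchored to
# the audio at 250 ms, with the eyes shifting toward the user and back.
nod = Behavior(
    name="affirmative_nod",
    keypoints=[
        Keypoint(frame_index=0, gaze_target=(0.5, 0.4)),
        Keypoint(frame_index=8, gaze_target=(0.5, 0.5), audio_anchor_ms=250),
        Keypoint(frame_index=16, gaze_target=(0.5, 0.4)),
    ],
)
print([k.frame_index for k in nod.anchored_keypoints()])  # [8]
```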