CPC G06T 13/00 (2013.01) [G06F 3/011 (2013.01); G06F 3/147 (2013.01); G06F 3/16 (2013.01); G06T 13/40 (2013.01); G06V 40/174 (2022.01); G06V 40/20 (2022.01); G09F 13/049 (2021.05); G10L 13/02 (2013.01); G10L 15/063 (2013.01); G10L 15/16 (2013.01); G10L 15/22 (2013.01); G10L 25/63 (2013.01); H05B 47/12 (2020.01); H05B 47/125 (2020.01); G06F 3/012 (2013.01); G06F 3/017 (2013.01); G06F 2203/011 (2013.01); G10L 2015/0638 (2013.01); G10L 2015/227 (2013.01)] (20 Claims)
1. A method, comprising:
selecting, with a development platform, a digital human;
receiving, with the development platform, user input specifying a dialog for the digital human and one or more behaviors for the digital human corresponding to one or more portions of the dialog on a common timeline;
wherein the dialog includes words to be spoken by the digital human in response to one or more predetermined cues received during an interactive dialog with an individual; and
generating scene data, with the development platform, by merging the one or more behaviors with the one or more portions of the dialog based on times of the one or more behaviors and the one or more portions of the dialog on the common timeline;
wherein the scene data is executable by a device to render the digital human and engage in the interactive dialog with the individual based on the one or more predetermined cues from the individual as received by the device during the interactive dialog.
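The merging step recited above aligns dialog portions and behaviors by their times on a common timeline to produce scene data. The following is a minimal, hypothetical sketch of such a merge; the event representation, field names, and example behaviors (e.g. "smile") are illustrative assumptions, not taken from the claim or any disclosed implementation.

```python
# Hypothetical sketch of merging dialog portions and behaviors on a common
# timeline into time-ordered "scene data". All names here are invented for
# illustration; the claim does not specify a data format.
from dataclasses import dataclass


@dataclass
class TimelineEvent:
    start: float   # time in seconds on the common timeline
    kind: str      # "dialog" (words to be spoken) or "behavior"
    payload: str   # dialog text, or a behavior label such as "smile"


def merge_scene(dialog, behaviors):
    """Merge (start, text) dialog portions and (start, name) behaviors
    into a single list of events ordered by start time."""
    events = [TimelineEvent(t, "dialog", text) for t, text in dialog]
    events += [TimelineEvent(t, "behavior", name) for t, name in behaviors]
    # sorted() is stable, so at equal start times dialog events (added
    # first) precede the behaviors paired with them.
    return sorted(events, key=lambda e: e.start)


scene = merge_scene(
    dialog=[(0.0, "Hello, how can I help?"), (3.0, "Of course.")],
    behaviors=[(0.0, "smile"), (3.0, "nod")],
)
```

A renderer could then walk this ordered list during the interactive dialog, speaking each dialog payload and triggering each behavior as its start time is reached.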