US 11,893,669 B2
Development platform for digital humans
Abhijit Z. Bendale, Campbell, CA (US); Pranav K. Mistry, Saratoga, CA (US); Bola Yoo, Seoul (KR); Kijeong Kwon, Seoul (KR); Simon Gibbs, San Jose, CA (US); Anil Unnikrishnan, Los Gatos, CA (US); and Link Huang, Mountain View, CA (US)
Assigned to SAMSUNG ELECTRONICS CO., LTD., Gyeonggi-do (KR)
Filed by SAMSUNG ELECTRONICS CO., LTD., Gyeonggi-do (KR)
Filed on Jan. 7, 2022, as Appl. No. 17/571,099.
Claims priority of provisional application 63/135,855, filed on Jan. 11, 2021.
Claims priority of provisional application 63/135,526, filed on Jan. 8, 2021.
Claims priority of provisional application 63/135,516, filed on Jan. 8, 2021.
Claims priority of provisional application 63/135,505, filed on Jan. 8, 2021.
Prior Publication US 2022/0222883 A1, Jul. 14, 2022
Int. Cl. G06T 13/00 (2011.01); G06V 40/20 (2022.01); G10L 15/16 (2006.01); G06T 13/40 (2011.01); G10L 13/02 (2013.01); G10L 25/63 (2013.01); G10L 15/22 (2006.01); H05B 47/12 (2020.01); H05B 47/125 (2020.01); G06V 40/16 (2022.01); G06F 3/01 (2006.01); G06F 3/16 (2006.01); G06F 3/147 (2006.01); G10L 15/06 (2013.01); G09F 13/04 (2006.01)
CPC G06T 13/00 (2013.01) [G06F 3/011 (2013.01); G06F 3/147 (2013.01); G06F 3/16 (2013.01); G06T 13/40 (2013.01); G06V 40/174 (2022.01); G06V 40/20 (2022.01); G09F 13/049 (2021.05); G10L 13/02 (2013.01); G10L 15/063 (2013.01); G10L 15/16 (2013.01); G10L 15/22 (2013.01); G10L 25/63 (2013.01); H05B 47/12 (2020.01); H05B 47/125 (2020.01); G06F 3/012 (2013.01); G06F 3/017 (2013.01); G06F 2203/011 (2013.01); G10L 2015/0638 (2013.01); G10L 2015/227 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
selecting, with a development platform, a digital human;
receiving, with the development platform, user input specifying a dialog for the digital human and one or more behaviors for the digital human corresponding with one or more portions of the dialog on a common timeline;
wherein the dialog includes words to be spoken by the digital human in response to one or more predetermined cues received during an interactive dialog with an individual; and
generating scene data, with the development platform, by merging the one or more behaviors with the one or more portions of the dialog based on times of the one or more behaviors and the one or more portions of the dialog on the common timeline;
wherein the scene data is executable by a device to render the digital human and engage in the interactive dialog with the individual based on the one or more predetermined cues from the individual as received by the device during the interactive dialog.
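The merging step recited in claim 1 can be pictured as aligning two time-indexed tracks, one of dialog portions and one of behaviors, and combining entries whose spans overlap on the common timeline. The following Python sketch is purely illustrative: the type names (DialogSegment, Behavior, SceneEvent), the function merge_scene_data, and the overlap rule are assumptions made for this example and are not a data model or algorithm disclosed by the patent.

# Hypothetical sketch of merging behaviors with dialog portions on a common
# timeline; names and the overlap rule are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogSegment:
    # A portion of the dialog: words the digital human is to speak,
    # positioned on the common timeline (times in seconds).
    start: float
    end: float
    text: str

@dataclass
class Behavior:
    # A behavior (e.g., a gesture or facial expression) on the same timeline.
    start: float
    end: float
    name: str

@dataclass
class SceneEvent:
    # One merged entry of the resulting scene data.
    start: float
    end: float
    text: str
    behaviors: List[str] = field(default_factory=list)

def merge_scene_data(dialog: List[DialogSegment],
                     behaviors: List[Behavior]) -> List[SceneEvent]:
    # Merge behaviors with dialog portions based on their times on the
    # common timeline: a behavior is attached to a dialog portion when
    # their time spans overlap.
    events: List[SceneEvent] = []
    for seg in sorted(dialog, key=lambda s: s.start):
        event = SceneEvent(seg.start, seg.end, seg.text)
        for beh in behaviors:
            if beh.start < seg.end and beh.end > seg.start:
                event.behaviors.append(beh.name)
        events.append(event)
    return events

if __name__ == "__main__":
    dialog = [DialogSegment(0.0, 2.5, "Hello, how can I help you?"),
              DialogSegment(2.5, 4.0, "Let me check that for you.")]
    behaviors = [Behavior(0.0, 1.0, "smile"), Behavior(2.5, 3.5, "nod")]
    for ev in merge_scene_data(dialog, behaviors):
        print(ev)

In this sketch the merged list of SceneEvent records stands in for the claimed scene data; a rendering device could iterate over it to speak each dialog portion while triggering the behaviors attached to that portion.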