US 11,942,075 B2
System and method for automated digital twin behavior modeling for multimodal conversations
Rajasekhar Tumuluri, Bridgewater, NJ (US)
Assigned to Openstream Inc., Bridgewater, NJ (US)
Filed by Openstream Inc., Somerset, NJ (US)
Filed on Sep. 24, 2021, as Appl. No. 17/483,882.
Prior Publication US 2023/0099393 A1, Mar. 30, 2023
Int. Cl. G10L 15/06 (2013.01); G06N 10/00 (2022.01); G10L 15/18 (2013.01); G10L 15/25 (2013.01); G10L 15/30 (2013.01)
CPC G10L 15/063 (2013.01) [G06N 10/00 (2019.01); G10L 15/1815 (2013.01); G10L 15/1822 (2013.01); G10L 15/25 (2013.01); G10L 15/30 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A method for interactive multimodal conversation, the method comprising:
receiving multimodal utterances as query input from a user's conversation at a computing device;
parsing the multimodal utterances for content;
recognizing one or more multimodal entities from the parsed content;
extracting the one or more multimodal entities;
determining semantic, syntactic, and structural relationships among the one or more multimodal entities;
determining one or more social and functional elements from the one or more multimodal entities;
generating at least one attentional element from the one or more social and functional elements;
shifting control, when triggered by the at least one attentional element, to one of a virtual human clone agent or a human agent based on an intent of the query input and context of the user's conversation; and
providing one or more responses to the user's conversation by interacting with a knowledge base when responding via the virtual human clone agent, or via a direct response from the human agent.
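The claim recites a processing pipeline rather than an implementation. Below is a minimal Python sketch of one plausible reading of that pipeline; it is not the patented method. Every name in it (MultimodalEntity, AttentionalElement, KNOWLEDGE_BASE, the keyword heuristics separating social from functional elements, and the handoff triggers) is hypothetical, and the relationship-determination step of the claim is elided for brevity.

```python
from dataclasses import dataclass

@dataclass
class MultimodalEntity:
    modality: str        # e.g. "speech", "gesture", "gaze"
    value: str           # recognized surface form or label
    role: str            # "social" (greeting, emotion) or "functional"

@dataclass
class AttentionalElement:
    needs_human: bool    # True -> shift control to a human agent
    reason: str

# Hypothetical knowledge base consulted by the virtual human clone agent.
KNOWLEDGE_BASE = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "returns": "Items may be returned within 30 days with a receipt.",
}

def extract_entities(utterances: list[dict]) -> list[MultimodalEntity]:
    """Parse multimodal utterances and extract entities (claim steps 2-4).
    A real system would run speech, gesture, and gaze recognizers; here each
    utterance arrives pre-recognized as {"modality": ..., "content": ...}."""
    social_markers = {"hello", "thanks", "sorry", "frustrated"}
    entities = []
    for utt in utterances:
        for token in utt["content"].lower().split():
            role = "social" if token in social_markers else "functional"
            entities.append(MultimodalEntity(utt["modality"], token, role))
    return entities

def attentional_element(entities: list[MultimodalEntity]) -> AttentionalElement:
    """Generate an attentional element from the social and functional
    elements (claim steps 6-7). Toy heuristic: an explicit request for an
    agent, or a help gesture, triggers a handoff to a human."""
    for ent in entities:
        if ent.value in {"agent", "human", "complaint"}:
            return AttentionalElement(True, f"functional trigger: {ent.value}")
        if ent.modality == "gesture" and ent.value == "wave_for_help":
            return AttentionalElement(True, "social trigger: help gesture")
    return AttentionalElement(False, "no handoff trigger")

def respond(utterances: list[dict]) -> str:
    """Shift control and respond (claim steps 8-9): route to a human agent
    when the attentional element fires; otherwise the virtual human clone
    agent answers by interacting with the knowledge base."""
    entities = extract_entities(utterances)
    trigger = attentional_element(entities)
    if trigger.needs_human:
        return f"[human agent] connecting you now ({trigger.reason})"
    for ent in entities:                 # intent ~ first knowledge-base hit
        if ent.value in KNOWLEDGE_BASE:
            return f"[virtual clone] {KNOWLEDGE_BASE[ent.value]}"
    return "[virtual clone] Could you rephrase your question?"

if __name__ == "__main__":
    print(respond([{"modality": "speech", "content": "hello what are your hours"}]))
    print(respond([{"modality": "speech", "content": "I have a complaint"},
                   {"modality": "gesture", "content": "wave_for_help"}]))
```

In this sketch the keyword lookups stand in for the intent and context determination the claim describes; a production system would replace them with trained classifiers over the fused multimodal entities, but the control flow (extract, generate attentional element, conditionally shift to a human, otherwise answer from the knowledge base) mirrors the ordering of the claimed steps.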