US 11,900,938 B2
Conversational agent response determined using a sentiment
Johnny Chen, Sunnyvale, CA (US); Thomas L. Dean, Los Altos Hills, CA (US); Qiangfeng Peter Lau, Mountain View, CA (US); Sudeep Gandhe, Sunnyvale, CA (US); and Gabriel Schine, Los Banos, CA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Jul. 18, 2022, as Appl. No. 17/867,161.
Application 17/867,161 is a continuation of application No. 16/939,298, filed on Jul. 27, 2020, granted, now 11,423,902.
Application 16/939,298 is a continuation of application No. 16/395,533, filed on Apr. 26, 2019, granted, now 10,726,840, issued on Jul. 28, 2020.
Application 16/395,533 is a continuation of application No. 15/966,975, filed on Apr. 30, 2018, granted, now 10,325,595, issued on Jun. 18, 2019.
Application 15/966,975 is a continuation of application No. 15/464,935, filed on Mar. 21, 2017, granted, now 9,997,158, issued on Jun. 12, 2018.
Application 15/464,935 is a continuation of application No. 15/228,488, filed on Aug. 4, 2016, granted, now 9,640,180, issued on May 2, 2017.
Application 15/228,488 is a continuation of application No. 14/447,737, filed on Jul. 31, 2014, granted, now 9,418,663, issued on Aug. 16, 2016.
Prior Publication US 2022/0351731 A1, Nov. 3, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G10L 15/00 (2013.01); G10L 15/22 (2006.01); G10L 17/22 (2013.01); H04L 67/104 (2022.01); G10L 15/26 (2006.01); G10L 13/00 (2006.01); G06F 16/332 (2019.01); G10L 15/18 (2013.01); G10L 13/033 (2013.01); G10L 15/30 (2013.01); G10L 13/08 (2013.01); G06F 21/62 (2013.01)
CPC G10L 15/22 (2013.01) [G06F 16/3329 (2019.01); G06F 21/6245 (2013.01); G10L 13/00 (2013.01); G10L 13/033 (2013.01); G10L 13/08 (2013.01); G10L 15/1815 (2013.01); G10L 15/1822 (2013.01); G10L 15/26 (2013.01); G10L 15/30 (2013.01); G10L 17/22 (2013.01); H04L 67/104 (2013.01); G10L 2015/223 (2013.01); G10L 2015/228 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method implemented by one or more processors of a user device, comprising:
receiving a spoken utterance from a user of the user device, the spoken utterance being directed to a first party computer-implemented agent that is executed at the user device;
determining whether the spoken utterance includes a request to interact with a third party computer-implemented agent, the third party computer-implemented agent being accessible by the user device over one or more networks; and
in response to determining that the spoken utterance includes the request to interact with the third party computer-implemented agent:
causing the third party computer-implemented agent to engage in a dialog with the user, wherein causing the third party computer-implemented agent to engage in the dialog with the user comprises:
causing the third party computer-implemented agent to generate third party computer-implemented agent voice output based on a particular style of speech that is specified by third party computer-implemented agent data associated with the third party computer-implemented agent; and
causing the third party computer-implemented agent voice output to be provided for presentation to the user at the user device.
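The claimed flow can be illustrated as a minimal sketch: a first party agent on the device checks an utterance for a request to reach a third party agent and, if found, hands the dialog to that agent, whose voice output is generated in the style specified by its own agent data. All class names, the "talk to" trigger phrase, and the style tagging below are hypothetical illustrations, not taken from the patent.

```python
import re
from dataclasses import dataclass


@dataclass
class ThirdPartyAgent:
    """Third party computer-implemented agent reachable over a network."""
    name: str
    speech_style: str  # particular style of speech from the agent's data

    def generate_voice_output(self, utterance: str) -> str:
        # Voice output is generated according to the agent-specified style.
        return f"[{self.speech_style}] {self.name} responding to: {utterance!r}"


class FirstPartyAgent:
    """First party agent executed at the user device."""

    def __init__(self, registry: dict[str, ThirdPartyAgent]):
        self.registry = registry

    def handle_utterance(self, utterance: str) -> str:
        # Determine whether the spoken utterance includes a request to
        # interact with a known third party agent (trigger phrase assumed).
        m = re.match(r"talk to (\w+)", utterance.lower())
        if m and m.group(1) in self.registry:
            agent = self.registry[m.group(1)]
            # Cause the third party agent to engage in the dialog and
            # provide its voice output for presentation at the device.
            return agent.generate_voice_output(utterance)
        return "First party agent handles the request locally."


agents = {"chefbot": ThirdPartyAgent("chefbot", "cheerful")}
device_agent = FirstPartyAgent(agents)
print(device_agent.handle_utterance("Talk to chefbot about dinner"))
```

In practice the third party agent would run remotely and the style data would drive speech synthesis parameters rather than a text tag; the sketch only mirrors the routing logic the claim recites.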