US 12,243,529 B2
Conversational agent response determined using a sentiment
Johnny Chen, Sunnyvale, CA (US); Thomas L. Dean, Los Altos Hills, CA (US); Qiangfeng Peter Lau, Mountain View, CA (US); Sudeep Gandhe, Sunnyvale, CA (US); and Gabriel Schine, Los Banos, CA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by GOOGLE LLC, Mountain View, CA (US)
Filed on Jan. 4, 2024, as Appl. No. 18/404,452.
Application 18/404,452 is a continuation of application No. 17/867,161, filed on Jul. 18, 2022, granted, now 11,900,938.
Application 17/867,161 is a continuation of application No. 16/939,298, filed on Jul. 27, 2020, granted, now 11,423,902, issued on Aug. 23, 2022.
Application 16/939,298 is a continuation of application No. 16/395,533, filed on Apr. 26, 2019, granted, now 10,726,840, issued on Jul. 28, 2020.
Application 16/395,533 is a continuation of application No. 15/966,975, filed on Apr. 30, 2018, granted, now 10,325,595, issued on Jun. 18, 2019.
Application 15/966,975 is a continuation of application No. 15/464,935, filed on Mar. 21, 2017, granted, now 9,997,158, issued on Jun. 12, 2018.
Application 15/464,935 is a continuation of application No. 15/228,488, filed on Aug. 4, 2016, granted, now 9,640,180, issued on May 2, 2017.
Application 15/228,488 is a continuation of application No. 14/447,737, filed on Jul. 31, 2014, granted, now 9,418,663, issued on Aug. 16, 2016.
Prior Publication US 2024/0135928 A1, Apr. 25, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G10L 15/00 (2013.01); G06F 16/3329 (2025.01); G06F 21/62 (2013.01); G10L 13/00 (2006.01); G10L 13/033 (2013.01); G10L 13/08 (2013.01); G10L 15/18 (2013.01); G10L 15/22 (2006.01); G10L 15/26 (2006.01); G10L 15/30 (2013.01); G10L 17/22 (2013.01); H04L 67/104 (2022.01)
CPC G10L 15/22 (2013.01) [G06F 16/3329 (2019.01); G06F 21/6245 (2013.01); G10L 13/00 (2013.01); G10L 13/033 (2013.01); G10L 13/08 (2013.01); G10L 15/1815 (2013.01); G10L 15/1822 (2013.01); G10L 15/26 (2013.01); G10L 15/30 (2013.01); G10L 17/22 (2013.01); H04L 67/104 (2013.01); G10L 2015/223 (2013.01); G10L 2015/228 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
receiving, by a first computer-implemented agent for a user device, a text representation of an utterance that includes a command, wherein the text representation of the utterance is determined based on a speech encoding of the utterance; and
in response to processing the text representation of the utterance to determine words included in the utterance:
determining, from among a plurality of different demographics, a particular demographic of a speaker of the utterance;
in response to determining the particular demographic, selecting, from a plurality of different computer-implemented agents, a particular computer-implemented agent based on the particular demographic associated with the particular computer-implemented agent matching the particular demographic, wherein each agent is associated with a respective one of the plurality of different demographics; and
causing the particular computer-implemented agent to provide an interface for processing subsequent utterances spoken by the speaker.
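The following sketch is not part of the patent; it is a minimal Python illustration of the flow recited in claim 1 (receive a text representation of an utterance, determine the speaker's demographic, select the agent associated with that demographic, and hand off handling of subsequent utterances). All names, such as Agent, AgentRegistry, and classify_demographic, are hypothetical stand-ins introduced only for illustration.

# Illustrative sketch only -- not from the patent text.
# Agent, AgentRegistry, and classify_demographic are hypothetical names.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Agent:
    name: str
    demographic: str                     # demographic this agent is associated with
    handle: Callable[[str], str]         # processes subsequent utterances by the speaker

class AgentRegistry:
    """Holds a plurality of different agents, each associated with one demographic."""
    def __init__(self, agents: List[Agent]):
        self._by_demographic: Dict[str, Agent] = {a.demographic: a for a in agents}

    def select(self, demographic: str) -> Agent:
        # Select the agent whose associated demographic matches the
        # demographic determined for the speaker.
        return self._by_demographic[demographic]

def handle_utterance(text: str,
                     registry: AgentRegistry,
                     classify_demographic: Callable[[List[str]], str]) -> Agent:
    # The first agent has already received a text representation of the
    # utterance (transcribed upstream from a speech encoding).
    words = text.split()                         # determine the words in the utterance
    demographic = classify_demographic(words)    # determine the speaker's demographic
    agent = registry.select(demographic)         # pick the matching agent
    return agent                                 # this agent handles subsequent utterances

In a usage example under the same assumptions, a caller would build an AgentRegistry from one Agent per demographic, pass a transcribed utterance and a demographic classifier to handle_utterance, and then route the speaker's later utterances to the returned agent's handle function.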