US 12,223,229 B2
Using user input to adapt search results provided for presentation to the user
David Kogan, Natick, MA (US); and Bryan Christopher Horling, Belmont, MA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by GOOGLE LLC, Mountain View, CA (US)
Filed on Jan. 3, 2024, as Appl. No. 18/403,313.
Application 18/403,313 is a continuation of application No. 17/363,350, filed on Jun. 30, 2021, granted, now Pat. No. 11,875,086.
Application 17/363,350 is a continuation of application No. 16/591,125, filed on Oct. 2, 2019, granted, now Pat. No. 11,074,038, issued on Jul. 27, 2021.
Application 16/591,125 is a continuation of application No. 15/252,031, filed on Aug. 30, 2016, granted, now Pat. No. 10,481,861, issued on Nov. 19, 2019.
Prior Publication US 2024/0168708 A1, May 23, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 16/248 (2019.01); G06F 3/0484 (2022.01); G06F 3/16 (2006.01); G06F 16/28 (2019.01); G06F 16/332 (2019.01); G06F 16/338 (2019.01); G06F 16/951 (2019.01)
CPC G06F 3/167 (2013.01) [G06F 3/0484 (2013.01); G06F 16/248 (2019.01); G06F 16/285 (2019.01); G06F 16/3326 (2019.01); G06F 16/3329 (2019.01); G06F 16/338 (2019.01); G06F 16/951 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A method implemented by one or more processors, comprising:
receiving a spoken input of a user, the spoken input being detected, via at least one microphone of a client computing device of the user, as part of a dialog between the user and an automated assistant implemented at least in part by one or more of the processors;
obtaining a search result that is responsive to the spoken input and that has an attribute, the attribute of the search result being one or more of: a name of an entity referenced in the search result, or a name of a source of the search result;
causing the search result to be visually rendered, via a display of the client computing device that is of a limited size, for presentation to the user;
in response to causing the search result to be visually rendered for presentation to the user, receiving further spoken input of the user, the further spoken input being detected, via the at least one microphone of the client computing device, as part of the dialog between the user and the automated assistant;
determining, based on processing the further spoken input, that the further spoken input references:
the attribute of the search result, and
a sentiment expressed by the user towards the attribute;
in response to determining that the further spoken input references the attribute of the search result and the sentiment expressed by the user towards the attribute:
determining, based on the attribute and the sentiment expressed by the user towards that attribute, whether to cause, as part of the dialog, an additional search result, that is also responsive to the spoken input and that also has the attribute, to be visually rendered, via the display of the client computing device that is of the limited size, for presentation to the user.
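A minimal sketch of the claimed flow, in Python, under stated assumptions: the helper names (SearchResult, parse_feedback, adapt_results) are hypothetical illustrations, not the patent's implementation or any library's API; speech capture and transcription are assumed to have already produced text; and sentiment detection is reduced to naive keyword matching where a production automated assistant would use NLU models.

```python
# Hypothetical sketch of the claimed dialog flow. All names are
# illustrative stand-ins, not APIs from the patent or any library.
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    entity: str   # name of an entity referenced in the result
    source: str   # name of the source of the result

def parse_feedback(utterance: str, shown: SearchResult):
    """Check whether the further spoken input references an attribute
    (entity or source) of the shown result and a sentiment towards it.
    Keyword matching stands in for real spoken-language understanding."""
    text = utterance.lower()
    attribute = None
    if shown.entity.lower() in text:
        attribute = ("entity", shown.entity)
    elif shown.source.lower() in text:
        attribute = ("source", shown.source)
    negative = any(w in text for w in ("don't", "not", "dislike", "hate"))
    positive = any(w in text for w in ("like", "love", "more", "great"))
    sentiment = "negative" if negative else "positive" if positive else None
    return attribute, sentiment

def adapt_results(results, shown, attribute, sentiment):
    """Decide whether additional results sharing the attribute should be
    rendered: suppress them on negative sentiment, prefer them on positive."""
    kind, value = attribute
    def has_attr(r):
        return (r.entity if kind == "entity" else r.source) == value
    if sentiment == "negative":
        return [r for r in results if r is not shown and not has_attr(r)]
    return [r for r in results if r is not shown and has_attr(r)]

# Usage: the user sees one result on the limited-size display, then
# voices a sentiment about one of its attributes (here, the entity).
results = [
    SearchResult("Review A", entity="Cafe X", source="SiteOne"),
    SearchResult("Review B", entity="Cafe X", source="SiteTwo"),
    SearchResult("Review C", entity="Cafe Y", source="SiteOne"),
]
shown = results[0]
attribute, sentiment = parse_feedback("I don't like Cafe X", shown)
if attribute and sentiment:
    # Negative sentiment toward entity "Cafe X" filters to Review C only.
    next_up = adapt_results(results, shown, attribute, sentiment)
```

The design point the sketch illustrates is the claim's conditional step: the additional same-attribute result is rendered or withheld only after the further spoken input is found to reference both the attribute and a sentiment towards it, so the dialog adapts subsequent visual results rather than re-running the original query.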