CPC G06F 3/167 (2013.01) [G06F 3/0484 (2013.01); G06F 16/248 (2019.01); G06F 16/285 (2019.01); G06F 16/3326 (2019.01); G06F 16/3329 (2019.01); G06F 16/338 (2019.01); G06F 16/951 (2019.01)]; 20 Claims

1. A method implemented by one or more processors, comprising:
receiving a spoken input of a user, the spoken input being detected, via at least one microphone of a client computing device of the user, as part of a dialog between the user and an automated assistant implemented at least in part by one or more of the processors;
obtaining a search result that is responsive to the spoken input and that has an attribute, the attribute of the search result being one or more of: a name of an entity referenced in the search result, or a name of a source of the search result;
causing the search result to be visually rendered, via a display of the client computing device that is of a limited size, for presentation to the user;
in response to causing the search result to be visually rendered for presentation to the user, receiving further spoken input of the user, the further spoken input being detected, via the at least one microphone of the client computing device, as part of the dialog between the user and the automated assistant;
determining, based on processing the further spoken input, that the further spoken input references:
the attribute of the search result, and
a sentiment expressed by the user towards the attribute;
in response to determining that the further spoken input references the attribute of the search result and the sentiment expressed by the user towards the attribute:
determining, based on the attribute and the sentiment expressed by the user towards the attribute, whether to cause, as part of the dialog, an additional search result, that is also responsive to the spoken input and that also has the attribute, to be visually rendered, via the display of the client computing device that is of the limited size, for presentation to the user.
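The decision flow recited in claim 1 can be sketched informally as follows. Everything in this sketch is an illustrative assumption rather than the claimed implementation: the `SearchResult` class, the keyword-based attribute matching and sentiment detection, and the particular policy (a positive sentiment toward an attribute surfaces an additional result sharing that attribute) are all hypothetical choices made only to make the claim's steps concrete.

```python
# Hypothetical sketch of the claim's final determining step: given a follow-up
# utterance, the search result already shown, and candidate results responsive
# to the original spoken input, decide whether to render an additional result
# that shares the referenced attribute. Names and heuristics are illustrative.
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class SearchResult:
    title: str
    entity: str   # name of an entity referenced in the search result
    source: str   # name of the source of the search result


# Crude illustrative sentiment cues; a real system would use a trained model.
NEGATIVE_WORDS = {"hate", "dislike", "awful"}
POSITIVE_WORDS = {"like", "love", "great"}


def referenced_attribute(utterance: str, result: SearchResult) -> Optional[str]:
    """Return the attribute value (entity or source name) the utterance mentions."""
    low = utterance.lower()
    for attr in (result.entity, result.source):
        if attr.lower() in low:
            return attr
    return None


def sentiment(utterance: str) -> Optional[str]:
    """Keyword-based polarity detection; negation phrase is checked first."""
    low = utterance.lower()
    if "don't like" in low or "do not like" in low:
        return "negative"
    words = set(low.split())
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return None


def decide_additional_result(
    utterance: str,
    shown: SearchResult,
    candidates: Sequence[SearchResult],
) -> Optional[SearchResult]:
    """If the user expresses positive sentiment toward an attribute of the
    shown result, return another candidate sharing that attribute; otherwise
    return None (suppress the additional rendering)."""
    attr = referenced_attribute(utterance, shown)
    if attr is None or sentiment(utterance) != "positive":
        return None
    for cand in candidates:
        if cand is not shown and attr in (cand.entity, cand.source):
            return cand
    return None
```

For example, after showing a result sourced from "Daily News", a follow-up of "I love the Daily News" would (under this sketch's policy) surface another "Daily News" result, while "I don't like the Daily News" would suppress it.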