US 12,437,764 B2
Initializing non-assistant background actions, via an automated assistant, while accessing a non-assistant application
Denis Burakov, Zurich (CH); Behshad Behzadi, Freienbach (CH); Mario Bertschler, Zurich (CH); Bohdan Vlasyuk, Zurich (CH); Daniel Cotting, Islisberg (CH); Michael Golikov, Merlischachen (CH); Lucas Mirelmann, Zurich (CH); Steve Cheng, Los Altos, CA (US); Sergey Nazarov, Zurich (CH); Zaheed Sabur, Baar (CH); Marcin Nowak-Przygodzki, Bäch (CH); Mugurel Ionut Andreica, Adliswil (CH); and Radu Voroneanu, Zurich (CH)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by GOOGLE LLC, Mountain View, CA (US)
Filed on Feb. 12, 2024, as Appl. No. 18/439,411.
Application 18/439,411 is a continuation of application No. 17/588,481, filed on Jan. 31, 2022, granted, now 12,106,759.
Application 17/588,481 is a continuation of application No. 16/614,224, granted, now 11,238,868, issued on Feb. 1, 2022, previously published as PCT/US2019/036932, filed on Jun. 13, 2019.
Claims priority of provisional application 62/843,987, filed on May 6, 2019.
Prior Publication US 2024/0185857 A1, Jun. 6, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G10L 15/26 (2006.01); G06F 3/16 (2006.01); G10L 15/22 (2006.01)
CPC G10L 15/26 (2013.01) [G06F 3/167 (2013.01); G10L 15/22 (2013.01); G10L 2015/223 (2013.01)] 18 Claims
 
1. A system comprising:
   memory storing instructions;
   one or more processors operable to execute the instructions to:
      determine that a user has provided, at a computing device, a spoken utterance that is directed to an automated assistant but does not explicitly identify any application that is accessible via the computing device,
         wherein the spoken utterance is received at an automated assistant interface of the computing device, and
         wherein the automated assistant is a separate application from an application;
      access, based on determining that the user has provided the spoken utterance that is directed to the automated assistant, application data characterizing multiple different actions capable of being performed by the application;
      determine, based on the application data, a correlation between content of the spoken utterance provided by the user and the application data;
      in response to determining the correlation between the content of the spoken utterance provided by the user and the application data:
         select, based on the content of the spoken utterance, an action from the multiple different actions characterized by the application data; and
         cause the application to perform the selected action.
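What follows is a minimal Python sketch of the flow recited in claim 1, assuming the spoken utterance has already been transcribed to text (the speech-to-text step implied by the G10L 15/26 classification). Everything in it is illustrative rather than taken from the patent: AppAction, ACTION_REGISTRY, correlation, and dispatch_to_app are hypothetical names, and the token-overlap score merely stands in for whatever semantic matching an actual assistant would use.

# Hypothetical sketch of the claim-1 flow; names and matching logic are
# illustrative only, not the patent's implementation.
from dataclasses import dataclass

@dataclass
class AppAction:
    app: str          # application that can perform the action
    name: str         # action identifier, e.g. "start_workout"
    description: str  # natural-language description used for matching

# Stand-in for "application data characterizing multiple different actions
# capable of being performed by the application".
ACTION_REGISTRY = [
    AppAction("fitness_app", "start_workout", "start a running workout session"),
    AppAction("fitness_app", "log_weight", "log today's body weight"),
    AppAction("music_app", "play_playlist", "play a music playlist"),
]

def correlation(utterance: str, action: AppAction) -> float:
    """Toy correlation score: the fraction of action-description words that
    also appear in the utterance. A real assistant would use a trained
    semantic matcher; token overlap just illustrates the claimed step."""
    u = set(utterance.lower().split())
    d = set(action.description.lower().split())
    return len(u & d) / len(d)

def dispatch_to_app(app: str, action_name: str) -> None:
    # Placeholder for inter-application dispatch (e.g., an OS-level intent):
    # "cause the application to perform the selected action".
    print(f"requesting {app} to perform {action_name}")

def handle_utterance(utterance: str) -> None:
    # The utterance is directed to the assistant but names no application,
    # so actions from every registered application are candidates.
    best = max(ACTION_REGISTRY, key=lambda a: correlation(utterance, a))
    if correlation(utterance, best) > 0.0:
        dispatch_to_app(best.app, best.name)

handle_utterance("hey assistant, start a running workout")

The point the sketch isolates is the claimed behavior: because the utterance names no application, the assistant scores the utterance against action descriptions gathered from separate applications, selects the best-correlated action, and asks that application to perform it.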