US 12,332,948 B2
In-conversation search
Maryam Garrett, Boston, MA (US); and Richard A. Miner, Boston, MA (US)
Assigned to Google LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Aug. 11, 2023, as Appl. No. 18/448,684.
Application 18/448,684 is a continuation of application No. 17/645,730, filed on Dec. 22, 2021, granted, now 11,755,666.
Application 17/645,730 is a continuation of application No. 16/807,555, filed on Mar. 3, 2020, granted, now 11,232,162, issued on Jan. 25, 2022.
Application 16/807,555 is a continuation of application No. 15/340,020, filed on Nov. 1, 2016, granted, now 10,621,243, issued on Apr. 14, 2020.
Application 15/340,020 is a continuation of application No. 14/684,744, filed on Apr. 13, 2015, granted, now 9,514,227, issued on Dec. 6, 2016.
Application 14/684,744 is a continuation of application No. 12/398,297, filed on Mar. 5, 2009, granted, now 9,031,216, issued on May 12, 2015.
Prior Publication US 2023/0385343 A1, Nov. 30, 2023
Int. Cl. G06F 7/00 (2006.01); G06F 16/3332 (2025.01); G06F 16/951 (2019.01); G10L 13/08 (2013.01); G10L 15/08 (2006.01); G10L 15/22 (2006.01); G10L 15/26 (2006.01); G10L 15/30 (2013.01); H04L 12/18 (2006.01); H04L 51/02 (2022.01); H04L 51/18 (2022.01); H04M 3/56 (2006.01); H04L 51/04 (2022.01)
CPC G06F 16/951 (2019.01) [G06F 16/3334 (2019.01); G10L 13/08 (2013.01); G10L 15/08 (2013.01); G10L 15/22 (2013.01); G10L 15/26 (2013.01); G10L 15/30 (2013.01); H04L 12/1827 (2013.01); H04L 51/02 (2013.01); H04L 51/18 (2013.01); H04M 3/56 (2013.01); G10L 2015/088 (2013.01); G10L 2015/223 (2013.01); H04L 51/04 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method when executed on data processing hardware causes the data processing hardware to perform operations comprising:
while operating in a passive monitoring mode:
receiving a voice input spoken by a user, the voice input comprising a particular keyphrase followed by a plurality of words and a keyword subsequent to the plurality of words, the keyword indicating that the user has finished speaking a query characterized by the plurality of words; and
without recognizing the content of the voice input spoken by the user, determining that the voice input includes the particular keyphrase;
in response to determining that the voice input includes the particular keyphrase, transitioning from the passive monitoring mode to an active mode and invoking a speech-to-text converter to:
convert the plurality of words in the voice input into a string of text until the keyword is spoken; and
submit the string of text as the query to a network-connected service; and
receiving, from the network-connected service, results related to the query.
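Illustrative note (not part of the patent text): the claimed flow, passive keyphrase spotting, a transition to an active speech-to-text mode that runs until a terminating keyword, and submission of the transcribed words as a query to a network-connected service, can be sketched as a small state machine. The Python below is a minimal, self-contained sketch under assumed names and values: the keyphrase "hey search", the stop keyword "stop", and the fake_search and in_conversation_search functions are hypothetical, and the word-level string handling merely stands in for real hotword detection and streaming speech recognition.

from enum import Enum, auto
from typing import Iterable, Iterator, List


class Mode(Enum):
    PASSIVE = auto()  # monitor only for the keyphrase; no full recognition
    ACTIVE = auto()   # speech-to-text runs until the terminating keyword


def fake_search(query: str) -> List[str]:
    """Stand-in for submitting the query to a network-connected service."""
    return [f"result for: {query}"]


def in_conversation_search(
    words: Iterable[str],
    keyphrase: str = "hey search",   # invocation keyphrase (assumed value)
    stop_keyword: str = "stop",      # keyword marking the end of the query
) -> Iterator[List[str]]:
    """Yield results each time a keyphrase ... stop_keyword span is heard."""
    key_words = keyphrase.lower().split()
    window: List[str] = []           # sliding window used for keyphrase spotting
    query_words: List[str] = []
    mode = Mode.PASSIVE

    for word in words:
        w = word.lower()
        if mode is Mode.PASSIVE:
            # Lightweight keyphrase spotting; the rest of the speech is ignored.
            window.append(w)
            window = window[-len(key_words):]
            if window == key_words:
                mode = Mode.ACTIVE   # transition from passive to active mode
                query_words.clear()
        else:
            if w == stop_keyword:    # the user has finished speaking the query
                yield fake_search(" ".join(query_words))
                mode = Mode.PASSIVE  # return to passive monitoring
                window.clear()
            else:
                query_words.append(word)  # accumulate the query string of text


if __name__ == "__main__":
    utterance = "so anyway hey search italian restaurants near boston stop sounds good".split()
    for results in in_conversation_search(utterance):
        print(results)

Running the module prints one result list for the query "italian restaurants near boston", mirroring a single keyphrase-to-keyword span within an ongoing conversation.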