US 12,190,892 B2
Selectively storing, with multiple user accounts and/or to a shared assistant device: speech recognition biasing, NLU biasing, and/or other data
Matthew Sharifi, Kilchberg (CH); and Victor Carbune, Zurich (CH)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by GOOGLE LLC, Mountain View, CA (US)
Filed on Oct. 18, 2023, as Appl. No. 18/381,417.
Application 18/381,417 is a continuation of application No. 17/982,863, filed on Nov. 8, 2022, granted, now Pat. No. 11,817,106.
Application 17/982,863 is a continuation of application No. 17/005,180, filed on Aug. 27, 2020, granted, now Pat. No. 11,532,313, issued on Dec. 20, 2022.
Prior Publication US 2024/0046936 A1, Feb. 8, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G10L 17/22 (2013.01); G06V 40/16 (2022.01); G10L 15/07 (2013.01); G10L 15/18 (2013.01); G10L 15/22 (2006.01); G10L 17/04 (2013.01); G10L 17/06 (2013.01); G10L 17/00 (2013.01)
CPC G10L 17/22 (2013.01) [G06V 40/172 (2022.01); G10L 15/07 (2013.01); G10L 15/18 (2013.01); G10L 15/22 (2013.01); G10L 17/04 (2013.01); G10L 17/06 (2013.01); G10L 2015/223 (2013.01); G10L 17/00 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method implemented by one or more processors of a shared assistant device, the method comprising:
receiving, via one or more microphones of the shared assistant device, audio data that captures a spoken utterance of a user;
generating, based on processing the audio data using a local speech-to-text (STT) engine of the shared assistant device, a transcription that corresponds to the spoken utterance captured in the audio data;
resolving, based on processing the transcription using a local natural language understanding (NLU) engine of the shared assistant device, an assistant action to perform in response to receiving the spoken utterance;
causing, in response to receiving the spoken utterance, performance of the assistant action resolved based on processing the transcription that corresponds to the spoken utterance;
determining whether to store, locally at the shared assistant device, one or more NLU biasing parameters that are based on the assistant action resolved locally at the shared assistant device using the local NLU engine;
in response to determining to store the one or more NLU biasing parameters locally at the shared assistant device:
storing the one or more NLU biasing parameters locally at the shared assistant device;
wherein storing the one or more NLU biasing parameters locally at the shared assistant device causes future spoken utterances, from any user and received at the shared assistant device, to be processed by the local NLU engine using the one or more NLU biasing parameters.
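The flow recited in claim 1 can be summarized in a short sketch. The Python below is a hypothetical, non-normative illustration only: the class and method names (SharedAssistantDevice, handle_utterance, local_stt, local_nlu, perform_action, should_store_biasing) and the dictionary representation of the NLU biasing parameters are assumptions introduced for illustration and do not appear in the patent; the actual on-device engines and storage policy are not specified here.

```python
# Hypothetical sketch of the claim 1 flow on a shared assistant device.
# All names and data structures are illustrative assumptions, not the
# patented implementation.
from dataclasses import dataclass, field


@dataclass
class NLUResult:
    intent: str   # resolved assistant action, e.g. "lights_on"
    slots: dict   # slot values extracted from the transcription


@dataclass
class SharedAssistantDevice:
    # NLU biasing parameters stored locally on the shared device; once stored,
    # they bias NLU processing of future utterances from any user of the device.
    nlu_biasing: dict = field(default_factory=dict)

    def handle_utterance(self, audio_data: bytes) -> None:
        # 1) Generate a transcription with the local speech-to-text engine.
        transcription = self.local_stt(audio_data)

        # 2) Resolve an assistant action with the local NLU engine, using any
        #    biasing parameters already stored locally.
        result = self.local_nlu(transcription, self.nlu_biasing)

        # 3) Cause performance of the resolved assistant action.
        self.perform_action(result)

        # 4) Determine whether to store NLU biasing parameters based on the
        #    resolved action.
        if self.should_store_biasing(result):
            # 5) Store locally; future utterances from any user at this device
            #    are then processed by the local NLU engine using these parameters.
            self.nlu_biasing[result.intent] = self.nlu_biasing.get(result.intent, 0) + 1

    # The helpers below are placeholders standing in for on-device engines.
    def local_stt(self, audio_data: bytes) -> str:
        return "turn on the kitchen lights"  # stand-in transcription

    def local_nlu(self, transcription: str, biasing: dict) -> NLUResult:
        # A real engine would boost candidate intents/slots according to `biasing`.
        return NLUResult(intent="lights_on", slots={"room": "kitchen"})

    def perform_action(self, result: NLUResult) -> None:
        print(f"performing {result.intent} with {result.slots}")

    def should_store_biasing(self, result: NLUResult) -> bool:
        # The claim leaves the storage decision open; one simple policy might
        # store parameters only for actions deemed non-sensitive.
        return result.intent not in {"read_messages"}
```

Calling handle_utterance twice on the same device instance would show the second call's NLU step receiving the biasing parameters stored by the first, mirroring the final "wherein" limitation that stored parameters apply to future utterances from any user of the shared device.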