CPC G06F 3/043 (2013.01) [G06F 1/163 (2013.01); G10L 15/22 (2013.01); G10L 21/0208 (2013.01); G10L 2021/02087 (2013.01)]    20 Claims

1. A method implemented by one or more processors, the method comprising:
processing contextual data that includes a type of device corresponding to a computing device;
determining, based on processing the contextual data, including the type of device, one or more rendering parameters for an acoustic signal that can be captured by a microphone of the computing device;
causing an output interface of the computing device to render the acoustic signal with the one or more rendering parameters determined based on processing the contextual data,
wherein the computing device includes an automated assistant application that provides access to an automated assistant that is responsive to natural language input from a user;
processing audio data that is captured by the microphone of the computing device and that characterizes at least a portion of the acoustic signal that is rendered by the output interface as the acoustic signal is being captured by the microphone of the computing device,
wherein the audio data includes an instance of data characterizing a change to the acoustic signal caused by the user directly touching an area of a housing of the computing device;
determining, based on processing the audio data, that the instance of data characterizing the change to the acoustic signal corresponds to a touch gesture for invoking the automated assistant,
wherein the touch gesture is performed when the user directly touches the area of the housing that does not include a touch display interface; and
causing, based on determining that the change to the acoustic signal corresponds to the touch gesture, the automated assistant application to initialize in furtherance of receiving a subsequent natural language input from the user via the microphone or another computing device.
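For orientation only, the following is a minimal, hypothetical Python sketch of how the steps recited in claim 1 might be exercised: selecting rendering parameters from contextual data such as device type, rendering an acoustic signal, comparing captured audio against the rendered signal to detect damping caused by a finger on the housing, and initializing an assistant when a touch gesture is detected. Every function name, parameter value, and the energy-drop heuristic below is an illustrative assumption and is not taken from the patent.

```python
# Illustrative sketch only; names and thresholds are hypothetical.
import numpy as np

SAMPLE_RATE = 48_000


def choose_rendering_params(device_type: str) -> dict:
    """Determine rendering parameters from contextual data (device type)."""
    if device_type == "wearable":
        return {"frequency_hz": 19_000, "amplitude": 0.2}  # near-ultrasonic, low power
    return {"frequency_hz": 18_000, "amplitude": 0.4}


def render_acoustic_signal(params: dict, duration_s: float) -> np.ndarray:
    """Render a tone with the chosen parameters (stand-in for the output interface)."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return params["amplitude"] * np.sin(2 * np.pi * params["frequency_hz"] * t)


def detect_touch_gesture(captured: np.ndarray, reference: np.ndarray,
                         threshold: float = 0.15) -> bool:
    """Flag a touch gesture when the captured audio deviates from the rendered
    reference signal (e.g., energy loss from damping by a touch on the housing)."""
    ref_energy = np.sum(reference ** 2) + 1e-12
    energy_drop = 1.0 - np.sum(captured ** 2) / ref_energy
    return energy_drop > threshold


def initialize_assistant() -> None:
    """Stand-in for initializing the automated assistant application."""
    print("Assistant initialized; awaiting natural language input.")


def maybe_invoke_assistant(device_type: str, captured_audio: np.ndarray) -> None:
    """End-to-end flow mirroring the claimed steps: render, compare, invoke."""
    params = choose_rendering_params(device_type)
    reference = render_acoustic_signal(params, duration_s=len(captured_audio) / SAMPLE_RATE)
    if detect_touch_gesture(captured_audio, reference):
        initialize_assistant()


# Usage example: simulate microphone capture of a damped copy of the rendered signal.
device = "wearable"
mic_frames = 0.9 * render_acoustic_signal(choose_rendering_params(device), 0.5)
maybe_invoke_assistant(device, mic_frames)
```

In this sketch the "change to the acoustic signal" is modeled purely as an energy drop; an actual implementation of the claimed method could rely on any acoustic feature that distinguishes a touched from an untouched housing.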