CPC G06F 21/32 (2013.01) [G06F 21/83 (2013.01); G06V 40/172 (2022.01); G10L 17/00 (2013.01)]; 20 Claims

1. A computer-implemented method comprising:
determining a first user is within a field of view of a camera of a hardware device associated with a group profile corresponding to a plurality of user identifiers;
capturing, by the camera of the hardware device, image data of the first user;
performing, by the hardware device, facial recognition processing using the image data and profile data associated with the group profile;
determining, by the hardware device, based on the facial recognition processing, that the first user corresponds to a first user identifier of the plurality of user identifiers;
receiving first input data corresponding to a first touch input on a touch-sensitive screen of the hardware device;
determining the first input data corresponds to a first user interface element displayed on the touch-sensitive screen;
determining first data indicating that the hardware device received the first touch input corresponding to the first user interface element;
storing first association data representing an association between the first touch input and the first user identifier;
sending, via a computer network, from the hardware device to a virtual assistant system different from the hardware device, the first data, the first association data associating the first touch input and the first user identifier, the first user identifier, and a request to perform operations to determine a response to the first touch input based on the first user identifier;
receiving, by the hardware device and from the virtual assistant system, output data representing a response to the first touch input, wherein the response was personalized to the first user based on the first user identifier; and
presenting an output on the touch-sensitive screen based at least in part on the output data.
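The following is a minimal, hypothetical sketch of the device-side flow recited in claim 1, written in Python for illustration only. Every name in it (GroupProfile, TouchEvent, recognize_user, VirtualAssistantClient, handle_touch) is an assumption introduced here; the claim does not disclose any particular API, data format, or implementation.

```python
# Hypothetical sketch of the device-side flow recited in claim 1. Every name
# here (GroupProfile, recognize_user, VirtualAssistantClient, and so on) is
# invented for illustration; the claim does not disclose any particular API.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class GroupProfile:
    """Group profile mapping each user identifier to enrolled face data."""
    user_embeddings: dict[str, list[float]]


@dataclass
class TouchEvent:
    """A touch input already resolved to a user interface element."""
    x: int
    y: int
    ui_element_id: str


def recognize_user(image_data: bytes, profile: GroupProfile) -> str | None:
    """Run on-device facial recognition against the group profile.

    Placeholder: a real device would compare an embedding extracted from
    image_data with the enrolled embeddings; here we simply return the first
    enrolled identifier so the example stays runnable.
    """
    return next(iter(profile.user_embeddings), None)


class VirtualAssistantClient:
    """Stand-in for the network client that talks to the remote assistant."""

    def request_response(self, first_data: dict, association: dict,
                         user_id: str) -> str:
        # Per the claim, the device sends the touch data, the touch-to-user
        # association, the user identifier, and a request for a response
        # personalized based on that identifier.
        return f"Personalized response for {user_id} to {first_data['ui_element_id']}"


def handle_touch(image_data: bytes, touch: TouchEvent,
                 profile: GroupProfile,
                 assistant: VirtualAssistantClient) -> str:
    """Identify the user, associate the touch with that user, and fetch a
    personalized response from the virtual assistant system."""
    user_id = recognize_user(image_data, profile)
    if user_id is None:
        raise RuntimeError("no enrolled user matched the captured image")

    # "First data": the device received this touch on this UI element.
    first_data = {"ui_element_id": touch.ui_element_id, "x": touch.x, "y": touch.y}
    # "First association data": this touch input belongs to this user.
    association = {"touch": touch.ui_element_id, "user_id": user_id}

    output = assistant.request_response(first_data, association, user_id)
    # The device would now present `output` on the touch-sensitive screen.
    return output


if __name__ == "__main__":
    profile = GroupProfile(user_embeddings={"alice": [0.1, 0.2], "bob": [0.3, 0.4]})
    touch = TouchEvent(x=120, y=300, ui_element_id="calendar_widget")
    print(handle_touch(b"<raw image bytes>", touch, profile, VirtualAssistantClient()))
```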