US 12,080,297 B2
Systems and methods for adaptive human-machine interaction and automatic behavioral assessment
Stefan Scherer, Santa Monica, CA (US); Aubrey Schick, Berkeley, CA (US); Nicole Marie Hurst, Los Angeles, CA (US); Sara Jenny Palencia, Los Angeles, CA (US); and Josh Anon, Los Angeles, CA (US)
Assigned to Embodied, Inc., Pasadena, CA (US)
Filed by Embodied, Inc., Pasadena, CA (US)
Filed on Jan. 16, 2023, as Appl. No. 18/097,372.
Application 18/097,372 is a continuation of application No. 16/675,640, filed on Nov. 6, 2019, granted, now Pat. No. 11,557,297.
Claims priority of provisional application 62/758,361, filed on Nov. 9, 2018.
Prior Publication US 2023/0215436 A1, Jul. 6, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G10L 15/22 (2006.01); G06F 16/9032 (2019.01); G10L 15/26 (2006.01)
CPC G10L 15/26 (2013.01) [G06F 16/90332 (2019.01); G10L 15/22 (2013.01)] 16 Claims
OG exemplary drawing
 
1. A method for controlling a robot comprising:
a conversation system controlling multi-modal output of the robot in accordance with selected conversational content, the multi-modal output controlled by the conversation system including sound output through a speaker, interactive gestures performed by an arm assembly, and facial expressions conveyed by images displayed on a face-like display screen;
a control system providing first event information to the conversation system based on sensing of a human interaction participant by at least one sensor of the robot during processing of the selected conversational content by the conversation system;
the conversation system controlling the multi-modal output of the robot in accordance with the first event information;
an evaluation system updating conversational content used by the conversation system based on the sensing of the human interaction participant by the at least one sensor of the robot;
the evaluation system providing evaluation results to an external client device based on the sensing of the human interaction participant; and
a goal authoring system providing a user interface to a client device, wherein the user interface includes:
at least one field for receiving user input specifying at least a first goal;
at least one field for receiving user input specifying a goal evaluation module of the robot that is to be used to evaluate the first goal;
at least one field for receiving user input specifying at least a first goal level of the first goal; and
at least one field for receiving user input specifying at least a first human interaction participant support level of the first goal level.
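
Claim 1 recites four cooperating systems: a conversation system driving multi-modal output, a control system converting sensor readings into event information, an evaluation system that both updates content and reports results to an external client device, and a goal authoring system. The following is a minimal Python sketch of how the first three might interact, offered purely as an illustration; every class, method, and event name in it (ConversationSystem, ControlSystem, EvaluationSystem, MultiModalOutput, Event, "gaze_averted", and so on) is a hypothetical stand-in, not the patentee's implementation.

    from dataclasses import dataclass, field

    @dataclass
    class MultiModalOutput:
        # One step of multi-modal output: speech for the speaker, a gesture
        # for the arm assembly, and an image for the face-like display screen.
        speech: str = ""
        arm_gesture: str = ""
        face_image: str = ""

    @dataclass
    class Event:
        # First event information derived from sensing the participant.
        kind: str                                   # e.g. "gaze_averted"
        payload: dict = field(default_factory=dict)

    class ConversationSystem:
        # Controls the robot's multi-modal output in accordance with the
        # selected conversational content and incoming event information.
        def __init__(self, content: list[MultiModalOutput]):
            self.content = content
            self.step = 0

        def next_output(self) -> MultiModalOutput | None:
            if self.step >= len(self.content):
                return None
            out = self.content[self.step]
            self.step += 1
            return out

        def handle_event(self, event: Event) -> MultiModalOutput:
            # Adjust output in accordance with the first event information,
            # e.g. re-engage a participant who has looked away.
            if event.kind == "gaze_averted":
                return MultiModalOutput(speech="Hey, over here!",
                                        arm_gesture="wave",
                                        face_image="smile.png")
            return MultiModalOutput()

    class ControlSystem:
        # Provides event information to the conversation system based on
        # sensing of the participant during content processing.
        def sense(self, reading: dict) -> Event | None:
            if reading.get("gaze") == "away":
                return Event(kind="gaze_averted", payload=reading)
            return None

    class EvaluationSystem:
        # Updates conversational content and reports evaluation results,
        # based on the same sensing, to an external client device.
        def __init__(self):
            self.results: list[dict] = []

        def update_content(self, convo: ConversationSystem, reading: dict) -> None:
            if reading.get("engagement", 1.0) < 0.5:
                # Drop content past the next step for a disengaged participant.
                convo.content = convo.content[:convo.step + 1]

        def report(self, reading: dict) -> dict:
            result = {"engagement": reading.get("engagement", 1.0)}
            self.results.append(result)   # would be sent to the client device
            return result

A driving loop would then alternate the steps the claim names: emit the next output, sense the participant, feed any resulting event back to the conversation system, and let the evaluation system update content and report results.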
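
The goal authoring system's user interface is defined by four input fields that nest naturally: a goal has an evaluation module, one or more goal levels, and one or more participant support levels per goal level. Below is a hypothetical data-model sketch of those fields; the names (GoalDefinition, GoalLevel, turn_taking_evaluator) are invented for illustration and do not appear in the patent.

    from dataclasses import dataclass, field

    @dataclass
    class GoalLevel:
        # Field 3: a first goal level of the first goal, carrying
        # field 4: at least one participant support level of that level.
        name: str
        support_levels: list[str] = field(default_factory=list)

    @dataclass
    class GoalDefinition:
        goal: str                # field 1: the first goal
        evaluation_module: str   # field 2: the robot's goal evaluation module
        levels: list[GoalLevel] = field(default_factory=list)

    # Example: a turn-taking goal as it might be authored through the UI.
    goal = GoalDefinition(
        goal="take turns in conversation",
        evaluation_module="turn_taking_evaluator",
        levels=[GoalLevel(name="level 1",
                          support_levels=["full verbal prompt"])],
    )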