CPC G10L 15/26 (2013.01) [G06F 16/90332 (2019.01); G10L 15/22 (2013.01)]
16 Claims
1. A method for controlling a robot comprising:
a conversation system controlling multi-modal output of the robot in accordance with selected conversational content, the multi-modal output controlled by the conversation system including sound output by a speaker, interactive gestures performed by an arm assembly, and facial expressions displayed as images on a face-like display screen;
a control system providing first event information to the conversation system based on sensing of a human interaction participant by at least one sensor of the robot during processing of the selected conversational content by the conversation system;
the conversation system controlling the multi-modal output of the robot in accordance with the first event information;
an evaluation system updating conversational content used by the conversation system based on the sensing of the human interaction participant by the at least one sensor of the robot;
the evaluation system providing evaluation results to an external client device based on the sensing of the human interaction participant; and
a goal authoring system providing a user interface to a client device, wherein the user interface includes at least one field for receiving user input specifying at least a first goal, at least one field for receiving user input specifying a goal evaluation module of the robot that is to be used to evaluate the first goal, at least one field for receiving user input specifying at least a first goal level of the first goal, and at least one field for receiving user input specifying at least a first human interaction participant support level of the first goal level.
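
The conversation system, control system, and evaluation system recited in the claim can be read as an event-driven runtime: sensing of the human interaction participant is converted into event information that alters the multi-modal output, while the evaluation system updates conversational content and provides evaluation results to an external client device. The Python sketch below is only one possible illustration of that reading; every class, method, event name, and update rule is a hypothetical assumption, not language taken from the claim or the specification.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class MultiModalOutput:
    # One multi-modal action: sound for the speaker, a gesture for the arm
    # assembly, and an image for the face-like display screen.
    speech: str
    gesture: str
    face_image: str


@dataclass
class ConversationalContent:
    # Selected conversational content being processed by the conversation system.
    content_id: str
    outputs: List[MultiModalOutput]


class ConversationSystem:
    # Controls the robot's multi-modal output in accordance with the selected
    # conversational content and with event information it receives.
    def __init__(self, actuate: Callable[[MultiModalOutput], None]):
        self._actuate = actuate
        self._content: Optional[ConversationalContent] = None

    def load(self, content: ConversationalContent) -> None:
        self._content = content

    def step(self) -> None:
        # Emit the next multi-modal output of the selected content.
        if self._content and self._content.outputs:
            self._actuate(self._content.outputs.pop(0))

    def on_event(self, event: Dict) -> None:
        # Event information from the control system alters the output,
        # e.g. a re-engagement prompt when the participant looks away.
        if event.get("type") == "gaze_lost":
            self._actuate(MultiModalOutput("Are you still with me?", "wave", "curious_face"))


class ControlSystem:
    # Converts sensing of the human interaction participant by the robot's
    # sensor(s) into event information for the conversation system.
    def __init__(self, conversation: ConversationSystem):
        self._conversation = conversation

    def process_sensor_frame(self, frame: Dict) -> None:
        if not frame.get("face_detected", True):
            self._conversation.on_event({"type": "gaze_lost"})


class EvaluationSystem:
    # Updates conversational content and provides evaluation results to an
    # external client device.
    def __init__(self, send_to_client: Callable[[Dict], None]):
        self._send_to_client = send_to_client
        self._responses = 0

    def observe(self, frame: Dict) -> None:
        if frame.get("speech_heard"):
            self._responses += 1

    def update_content(self, content: ConversationalContent) -> None:
        # Example update rule: shorten the remaining output if the participant
        # has not responded at all.
        if self._responses == 0 and len(content.outputs) > 2:
            content.outputs = content.outputs[:2]

    def report(self) -> None:
        self._send_to_client({"evaluation_results": {"responses": self._responses}})

In this reading, the "first event information" of the claim corresponds to the dictionary passed to on_event, and the evaluation results correspond to the payload sent by report; the actual systems of the patent may partition these responsibilities differently.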
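
The goal authoring limitation enumerates four kinds of user-interface fields: a goal, a goal evaluation module of the robot, a goal level, and a human interaction participant support level. A minimal sketch of the data such fields might collect follows, assuming hypothetical class and field names chosen for illustration only.

from dataclasses import dataclass
from typing import List


@dataclass
class SupportLevel:
    # A human interaction participant support level of one goal level.
    name: str
    description: str


@dataclass
class GoalLevel:
    # One goal level of the goal, with its participant support levels.
    name: str
    support_levels: List[SupportLevel]


@dataclass
class GoalDefinition:
    # The user input collected by the goal authoring user interface.
    goal_name: str            # field: at least a first goal
    evaluation_module: str    # field: goal evaluation module of the robot
    levels: List[GoalLevel]   # field(s): goal levels and their support levels


# Example of filling in the fields for a single authored goal.
goal = GoalDefinition(
    goal_name="maintain eye contact",
    evaluation_module="gaze_tracking_evaluator",
    levels=[
        GoalLevel(
            name="level_1",
            support_levels=[
                SupportLevel("verbal prompt", "robot asks the participant to look up"),
            ],
        ),
    ],
)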