US 12,122,039 B2
Information processing device and information processing method
Tomoo Mizukami, Tokyo (JP); Kei Yoshinaka, Tokyo (JP); Naoki Yuasa, Chiba (JP); Izumi Kawanishi, Tokyo (JP); Hiroto Watanabe, Tokyo (JP); Junichirou Sakata, Tokyo (JP); Tohru Kurata, Tokyo (JP); and Yuichiro Tou, Tokyo (JP)
Assigned to SONY CORPORATION, Tokyo (JP)
Appl. No. 16/769,989
Filed by SONY CORPORATION, Tokyo (JP)
PCT Filed Sep. 18, 2018, PCT No. PCT/JP2018/034482
§ 371(c)(1), (2) Date Jun. 4, 2020,
PCT Pub. No. WO2019/123744, PCT Pub. Date Jun. 27, 2019.
Claims priority of application No. 2017-246738 (JP), filed on Dec. 22, 2017.
Prior Publication US 2021/0197393 A1, Jul. 1, 2021
Int. Cl. B25J 11/00 (2006.01); A63H 3/36 (2006.01); G06F 3/04842 (2022.01)
CPC B25J 11/001 (2013.01) [A63H 3/365 (2013.01); G06F 3/04842 (2013.01); A63H 2200/00 (2013.01)] 18 Claims
OG exemplary drawing
 
1. An information processing device, comprising:
a processor configured to:
control a screen to display an avatar;
control operation of an application, wherein the application is related to communication between an autonomous operation body and a user;
control the autonomous operation body to reflect a first operation performed by the user in the application;
control, based on the first operation performed by the user, a first operation of the avatar that imitates the autonomous operation body;
control the avatar to execute a second operation and a third operation;
detect user information based on the execution of the second operation and the third operation;
select, as a first reward, one of the second operation or the third operation based on the detected user information,
wherein the selected one of the second operation or the third operation includes a functional enhancement of the autonomous operation body;
output the first reward to the avatar;
control the autonomous operation body to reflect the first reward obtained by the avatar, wherein
the selected one of the second operation or the third operation is inexecutable by the autonomous operation body before the output of the first reward, and
the selected one of the second operation or the third operation is executable by the autonomous operation body after the output of the first reward;
control, based on a user operation on the screen, the avatar to execute a fourth operation;
control the screen to display an effect that indicates an emotional expression of the avatar as a reaction to the user operation;
control the screen to display a physical condition associated with the autonomous operation body, wherein
the physical condition includes error information, and
the error information includes at least a plurality of errors associated with a plurality of actuators included in the autonomous operation body; and
set a background of the screen as a gradation of two or more colors of a plurality of colors, based on co-existence of two or more emotions of the autonomous operation body, wherein
each color of the plurality of colors is associated with a different emotion, and
the two or more colors correspond to the two or more emotions of the autonomous operation body.
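
The reward flow recited in the claim (selecting one of two avatar operations based on detected user information, then making that operation newly executable on the autonomous operation body as a functional enhancement) can be pictured with a short sketch. The Python below is purely illustrative and is not the patented implementation: the class and function names, the reaction-score form of the user information, and the "prefer the higher score" selection rule are all assumptions.

# Hypothetical sketch of the claimed reward selection and unlock flow.
from dataclasses import dataclass, field


@dataclass
class AutonomousOperationBody:
    """Stand-in for the robot; tracks which operations it may execute."""
    executable_operations: set = field(default_factory=set)

    def can_execute(self, operation: str) -> bool:
        return operation in self.executable_operations

    def unlock(self, operation: str) -> None:
        # Reflecting the reward obtained by the avatar: the selected
        # operation becomes executable (a functional enhancement).
        self.executable_operations.add(operation)


def select_reward(user_info: dict, second_op: str, third_op: str) -> str:
    """Pick one of the two candidate operations from detected user information.

    The selection rule (prefer the operation with the higher reaction
    score) is an assumption made only for this illustration.
    """
    if user_info.get(second_op, 0) >= user_info.get(third_op, 0):
        return second_op
    return third_op


body = AutonomousOperationBody()
user_info = {"dance": 3, "handshake": 5}      # hypothetical reaction scores
reward = select_reward(user_info, "dance", "handshake")

assert not body.can_execute(reward)           # inexecutable before the reward is output
body.unlock(reward)                           # output the reward obtained by the avatar
assert body.can_execute(reward)               # executable after the reward is output
print(f"Unlocked operation: {reward}")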
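
The claim also recites displaying a physical condition of the autonomous operation body, including error information for a plurality of actuators. The sketch below shows one way such a data structure and screen summary could look; the actuator names, status fields, and text layout are hypothetical and not taken from the patent.

# Hypothetical sketch of actuator error information shown on the screen.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ActuatorStatus:
    temperature_c: float
    error: Optional[str] = None   # e.g. "overload" or "stall"; None if healthy


def render_physical_condition(actuators: Dict[str, ActuatorStatus]) -> str:
    """Format a screen-ready summary of per-actuator status; layout is assumed."""
    lines = ["Physical condition:"]
    for name, status in actuators.items():
        state = status.error if status.error else "OK"
        lines.append(f"  {name}: {status.temperature_c:.1f} C, {state}")
    return "\n".join(lines)


print(render_physical_condition({
    "neck_pitch": ActuatorStatus(41.2),
    "left_elbow": ActuatorStatus(55.8, error="overload"),
    "right_knee": ActuatorStatus(60.1, error="stall"),
}))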
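
Finally, the gradient-background element (a gradation of two or more colors when two or more emotions co-exist, each color associated with a different emotion) can be illustrated as follows. The emotion-to-color mapping, the two-emotion linear blend, and the list-of-RGB output are assumptions made for illustration; an actual renderer would draw the gradation across the screen background.

# Hypothetical sketch of a two-color background gradation from co-existing emotions.
EMOTION_COLORS = {            # each color is associated with a different emotion
    "joy":     (255, 200, 0),
    "calm":    (0, 160, 255),
    "sadness": (60, 60, 180),
}


def background_gradient(emotions, steps=5):
    """Return RGB tuples blending the colors of the first and last given emotions."""
    colors = [EMOTION_COLORS[name] for name in emotions]
    start, end = colors[0], colors[-1]
    return [
        tuple(round(s + (e - s) * t / (steps - 1)) for s, e in zip(start, end))
        for t in range(steps)
    ]


# Two co-existing emotions yield a two-color gradation for the background.
print(background_gradient(["joy", "calm"]))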