US 12,105,869 B2
Information processing apparatus and information processing method
Kenji Sugihara, Tokyo (JP); and Mari Saito, Tokyo (JP)
Assigned to SONY CORPORATION, Tokyo (JP)
Appl. No. 17/250,621
Filed by SONY CORPORATION, Tokyo (JP)
PCT Filed Aug. 7, 2019, PCT No. PCT/JP2019/031136
§ 371(c)(1), (2) Date Feb. 11, 2021,
PCT Pub. No. WO2020/039933, PCT Pub. Date Feb. 27, 2020.
Claims priority of application No. 2018-157423 (JP), filed on Aug. 24, 2018.
Prior Publication US 2021/0165484 A1, Jun. 3, 2021
Int. Cl. G06F 3/01 (2006.01); H04N 13/268 (2018.01); H04N 13/00 (2018.01)
CPC G06F 3/013 (2013.01) [G06F 3/017 (2013.01); H04N 13/268 (2018.05); H04N 2013/0081 (2013.01)] 18 Claims
OG exemplary drawing
 
1. An information processing device, comprising:
a processor configured to:
acquire movement information about a gesture by a user;
acquire information about a gazing point of the user;
control a display device based on the movement information;
cause the display device to display a first virtual object including information relating to a target object in a first region related to the target object, wherein the first region is on the display device;
vary the display of the first virtual object based on a position of the gazing point during a duration in which the user makes the gesture;
cause the display device to increase an information amount of the first virtual object based on presence of the gazing point in the first region while the gesture is made by the user, wherein
the increase in the information amount of the first virtual object corresponds to a switch of a texture of the first virtual object from a still image to a moving image, and
the target object is a second virtual object;
cause the display device to display the first virtual object behind the second virtual object as viewed from the user;
cause the display device to display a fourth virtual object behind the second virtual object as viewed from the user; and
move, during movement of the first virtual object and the second virtual object in a depth direction, the fourth virtual object to a position where the first virtual object was located until the gesture of the user started, in a case where the gazing point of the user does not remain on either the first virtual object or the second virtual object while the gesture is made by the user.
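
The claim recites a gaze- and gesture-driven display policy rather than a concrete implementation. The sketch below is a minimal, hypothetical Python rendering of that policy; every name (Region, VirtualObject, DisplayController, the texture strings, and the update parameters) is an assumption introduced for illustration and is not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class Region:
    """Rectangular first region on the display device (hypothetical)."""
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)


@dataclass
class VirtualObject:
    """Simplified virtual object: a name, a position, and a texture."""
    name: str
    position: tuple  # (x, y, depth) relative to the display
    texture: str = "still_image"


class DisplayController:
    """Hypothetical controller mirroring the claimed processor behavior."""

    def __init__(self, first_obj, second_obj, fourth_obj, first_region):
        self.first_obj = first_obj      # first virtual object (information about the target)
        self.second_obj = second_obj    # target object (here, the second virtual object)
        self.fourth_obj = fourth_obj    # fourth virtual object, displayed behind the target
        self.first_region = first_region
        # Position of the first object before the gesture started.
        self.first_obj_pre_gesture_pos = first_obj.position

    def on_gesture_start(self):
        # Snapshot the first object's position at gesture onset.
        self.first_obj_pre_gesture_pos = self.first_obj.position

    def update(self, gesture_active, gaze_point,
               gaze_stayed_on_first_or_second, depth_motion_active):
        """Per-frame update with gesture, gaze, and motion state."""
        if not gesture_active:
            return
        # Increase the information amount of the first virtual object
        # (still image -> moving image) while the gazing point is in the
        # first region and the gesture is being made.
        if self.first_region.contains(*gaze_point):
            self.first_obj.texture = "moving_image"
        # If, during depth movement of the first and second objects, the
        # gaze does not remain on either of them, move the fourth object
        # to where the first object was located before the gesture started.
        if depth_motion_active and not gaze_stayed_on_first_or_second:
            self.fourth_obj.position = self.first_obj_pre_gesture_pos


# Example use of the hypothetical controller.
region = Region(0, 0, 100, 50)
first = VirtualObject("info_panel", (10, 10, 1))
second = VirtualObject("target", (10, 10, 0))
fourth = VirtualObject("secondary_panel", (40, 10, 2))
ctrl = DisplayController(first, second, fourth, region)
ctrl.on_gesture_start()
ctrl.update(gesture_active=True, gaze_point=(20, 20),
            gaze_stayed_on_first_or_second=True, depth_motion_active=False)
print(first.texture)  # "moving_image"
```

In this reading, the per-frame update cleanly separates the two claimed conditions: the texture switch depends only on the gaze point lying in the first region during the gesture, while the relocation of the fourth object additionally requires depth movement and the absence of sustained gaze on the first or second object.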