US 11,747,897 B2
Data processing apparatus and method of using gaze data to generate images
Fabio Cappello, London (GB); and Maria Chiara Monti, London (GB)
Assigned to Sony Interactive Entertainment Inc., Tokyo (JP)
Filed by Sony Interactive Entertainment Inc., Tokyo (JP)
Filed on Jun. 28, 2021, as Appl. No. 17/360,215.
Claims priority of application No. 2010212 (GB), filed on Jul. 3, 2020.
Prior Publication US 2022/0004253 A1, Jan. 6, 2022
Int. Cl. G06F 3/01 (2006.01); G02B 27/00 (2006.01); G02B 27/01 (2006.01); G06T 13/40 (2011.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01)
CPC G06F 3/013 (2013.01) [G02B 27/0093 (2013.01); G02B 27/017 (2013.01); G06T 13/40 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A data processing apparatus, comprising:
avatar monitoring circuitry to receive gaze data for a first user associated with a first avatar in a virtual reality environment, the gaze data for the first user indicative of a gaze point for the first user with respect to the virtual reality environment, in which the avatar monitoring circuitry is configured to select one or more objects in the virtual reality environment in dependence upon the gaze data for the first user and to store first avatar information for the first avatar indicative of one or more of the selected objects;
input circuitry to receive gaze data for a second user indicative of a gaze point for the second user with respect to the virtual reality environment; and
processing circuitry to generate images for the virtual reality environment for display to the second user, in which the processing circuitry is configured to:
generate one or more of the images for the virtual reality environment to include the first avatar and select the first avatar in dependence upon whether the gaze point for the second user is within a predetermined distance of the first avatar in one or more of the images for the virtual reality environment; and
generate one or more of the images including the first avatar to include at least one graphical element indicative of the first avatar information in response to the selection of the first avatar by the gaze point for the second user,
wherein the first avatar information comprises one or more of:
identification information for the first avatar;
identification information for a selected object; and
object type information indicative of a type of the selected object.
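The claim describes a mechanism in which a first user's gaze selects objects that are stored as "first avatar information", and a second user's gaze, when within a predetermined distance of the first avatar, triggers display of a graphical element indicative of that stored information. The following Python sketch is illustrative only and is not taken from the patent: the names (SceneObject, Avatar, AvatarMonitor, Renderer) and the simple 2-D distance test standing in for the "predetermined distance" check are assumptions made for the example, not the claimed implementation.

# Minimal illustrative sketch of the claimed behaviour; all names and the
# distance-based selection geometry are hypothetical, not from the patent.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Optional
import math


@dataclass
class SceneObject:
    object_id: str
    object_type: str                    # e.g. "tool", "door"
    position: tuple[float, float]


@dataclass
class Avatar:
    avatar_id: str
    position: tuple[float, float]
    # "First avatar information": objects the first user has gazed at,
    # as stored by the avatar monitoring circuitry of claim 1.
    gazed_objects: list[SceneObject] = field(default_factory=list)


def _distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])


class AvatarMonitor:
    """Stands in for the avatar monitoring circuitry."""

    def __init__(self, scene_objects: list[SceneObject], select_radius: float):
        self.scene_objects = scene_objects
        self.select_radius = select_radius

    def update(self, avatar: Avatar, gaze_point: tuple[float, float]) -> None:
        # Select objects near the first user's gaze point and store them
        # as first avatar information for that user's avatar.
        for obj in self.scene_objects:
            if (_distance(obj.position, gaze_point) <= self.select_radius
                    and obj not in avatar.gazed_objects):
                avatar.gazed_objects.append(obj)


class Renderer:
    """Stands in for the processing circuitry."""

    def __init__(self, avatar_select_distance: float):
        self.avatar_select_distance = avatar_select_distance

    def label_for(self, first_avatar: Avatar,
                  second_user_gaze: tuple[float, float]) -> Optional[str]:
        # Select the first avatar when the second user's gaze point lies
        # within a predetermined distance of it, and return a graphical
        # element (here, a text label) indicative of the stored information.
        if _distance(first_avatar.position, second_user_gaze) > self.avatar_select_distance:
            return None
        names = ", ".join(o.object_id for o in first_avatar.gazed_objects) or "nothing yet"
        return f"{first_avatar.avatar_id} has looked at: {names}"


if __name__ == "__main__":
    objects = [SceneObject("red_key", "tool", (2.0, 1.0)),
               SceneObject("exit_door", "door", (8.0, 3.0))]
    first_avatar = Avatar("player_one", position=(5.0, 5.0))

    monitor = AvatarMonitor(objects, select_radius=1.5)
    monitor.update(first_avatar, gaze_point=(2.2, 1.1))     # first user looks at the key

    renderer = Renderer(avatar_select_distance=2.0)
    print(renderer.label_for(first_avatar, second_user_gaze=(5.5, 4.5)))
    # -> "player_one has looked at: red_key"

In this toy example, the label returned by Renderer.label_for corresponds to the claimed graphical element, and its contents (avatar identifier plus gazed-at object identifiers) correspond to the "one or more of" identification and object type information recited at the end of the claim.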