US 11,893,705 B2
Reference image generation apparatus, display image generation apparatus, reference image generation method, and display image generation method
Yuki Karasawa, Tokyo (JP)
Assigned to SONY INTERACTIVE ENTERTAINMENT INC., Tokyo (JP)
Filed by Sony Interactive Entertainment Inc., Tokyo (JP)
Filed on Mar. 1, 2022, as Appl. No. 17/684,101.
Application 17/684,101 is a division of application No. 16/980,004, granted, now 11,308,577, previously published as PCT/JP2018/014475, filed on Apr. 4, 2018.
Prior Publication US 2022/0270205 A1, Aug. 25, 2022
Int. Cl. G06T 3/20 (2006.01); G06T 7/90 (2017.01)
CPC G06T 3/20 (2013.01) [G06T 7/90 (2017.01); G06T 2207/10024 (2013.01)] 10 Claims
OG exemplary drawing
 
1. An apparatus configured to display a virtual environment and comprising:
a memory;
a display; and
processing circuitry configured to:
store video data of the virtual environment, to include video data of a reference image representative of a picture of a space in the virtual environment that includes an object represented by a plurality of pixels,
wherein the video data of the reference image includes video data corresponding to a plurality of reference points of view in the virtual environment;
compare a point of view of a user to the plurality of reference points of view; and
display a picture of the virtual environment, including a picture of the object, from the point of view of the user;
wherein, based on the point of view of the user not being the same as any of the plurality of reference points of view, the picture of the object that is viewed from the point of view of the user is formed by an image averaging process that includes:
retrieving from the stored video data:
a first set of pixel-specific colors of the object as viewed from a first reference point of view of the plurality of reference points of view that is closest to the point of view of the user in a first direction, and
a second set of pixel-specific colors of the object as viewed from a second reference point of view of the plurality of reference points of view that is second closest to the point of view of the user in the first direction; and
setting pixel-specific colors of the object as viewed from the point of view of the user based on a weighted average of the first set of pixel-specific colors and the second set of pixel-specific colors,
wherein the weighted average is based on:
a first distance between the point of view of the user and the first reference point of view, and
a second distance between the point of view of the user and the second reference point of view,
wherein the first and second distances comprise first and second linear distances, and
wherein the weighted average of the first set of pixel-specific colors and the second set of pixel-specific colors comprises, for each pixel, a pixel-specific weighted color C=w1·c1+w2·c2, where
w1+w2=1,
c1=a pixel-specific color from the first reference point of view,
c2=a pixel-specific color from the second reference point of view,
w1=(1/Δa²)/sum,
w2=(1/Δb²)/sum,
sum=1/Δa²+1/Δb²,
Δa=the first distance, and
Δb=the second distance.
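For illustration only (not part of the claim text), the following is a minimal Python sketch of the inverse-square-distance weighted averaging recited in claim 1: colors from the two reference points of view nearest the user's point of view are blended with weights proportional to 1/Δa² and 1/Δb², normalized so they sum to 1. The function name blend_pixel_colors, the array shapes, and the example color values are assumptions made for the sketch.

import numpy as np

def blend_pixel_colors(ref_colors_a, ref_colors_b, dist_a, dist_b):
    # ref_colors_a: pixel-specific colors of the object as viewed from the
    #   closest reference point of view (e.g. an (H, W, 3) array or one RGB triple).
    # ref_colors_b: pixel-specific colors from the second-closest reference point of view.
    # dist_a, dist_b: linear distances from the user's point of view to the
    #   first and second reference points of view.
    #
    # Weights are proportional to the inverse square of each distance and are
    # normalized so that w1 + w2 = 1, matching the claim's formulation.
    inv_a = 1.0 / (dist_a ** 2)
    inv_b = 1.0 / (dist_b ** 2)
    total = inv_a + inv_b
    w1 = inv_a / total
    w2 = inv_b / total
    # Pixel-specific weighted color: C = w1*c1 + w2*c2
    return w1 * ref_colors_a + w2 * ref_colors_b

# Example for a single pixel: the closer reference view (dist_a = 1) dominates
# the farther one (dist_b = 2), giving weights w1 = 0.8 and w2 = 0.2.
c1 = np.array([200.0, 120.0, 40.0])
c2 = np.array([180.0, 140.0, 60.0])
print(blend_pixel_colors(c1, c2, dist_a=1.0, dist_b=2.0))  # -> [196. 124.  44.]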