CPC G06F 3/013 (2013.01) [G06T 7/11 (2017.01); G06T 15/20 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10048 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30201 (2013.01); G06T 2207/30241 (2013.01); G06T 2207/30268 (2013.01)] (20 Claims)
1. A method for rendering video images in virtual reality (VR) scenes, the method comprising:
providing a video image at a current time point;
dividing the video image at the current time point into a plurality of sub-regions;
inputting image feature information of the sub-regions and acquired user viewpoint feature information into a trained attention model for processing to obtain attention coefficients of the sub-regions, each attention coefficient indicating a probability that a user viewpoint at a next time point falls within the corresponding sub-region;
rendering the sub-regions based on the attention coefficients of the sub-regions to obtain a rendered video image at the current time point;
inputting the attention coefficients of the sub-regions and the image feature information of the sub-regions into a trained user eye trajectory prediction model for processing;
obtaining user eye trajectory information for a current time period;
dividing, for video images at subsequent time points within the current time period, each of the video images at the subsequent time points into a plurality of sub-regions, and calculating attention coefficients of the sub-regions in the video image at each of the subsequent time points based on the user eye trajectory information for the current time period; and
rendering the corresponding sub-regions based on the attention coefficients of the sub-regions to obtain a rendered video image at each of the subsequent time points.
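The pipeline recited in claim 1 can be sketched as follows. This is a minimal illustration only, not the patented implementation: the 4×4 grid, the distance-softmax stand-in for the trained attention model, the linear-drift stand-in for the trained eye trajectory prediction model, and the 0.10 rendering threshold are all hypothetical choices made for the example.

```python
import numpy as np

GRID = (4, 4)  # hypothetical sub-region grid; the claim does not fix a count


def divide_into_subregions(frame, grid=GRID):
    """Split an H x W frame into grid[0] x grid[1] sub-regions."""
    h, w = frame.shape[:2]
    gh, gw = h // grid[0], w // grid[1]
    return [frame[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            for r in range(grid[0]) for c in range(grid[1])]


def _subregion_centers(grid=GRID):
    """Normalized (x, y) center of each sub-region, row-major order."""
    return np.array([[(c + 0.5) / grid[1], (r + 0.5) / grid[0]]
                     for r in range(grid[0]) for c in range(grid[1])])


def attention_coefficients(gaze_xy, grid=GRID):
    """Stand-in for the trained attention model: score each sub-region by
    its distance from the current gaze point and softmax the scores, so
    each coefficient is a probability that the viewpoint at the next
    time point falls within that sub-region."""
    dist = np.linalg.norm(_subregion_centers(grid) - np.asarray(gaze_xy), axis=1)
    logits = -8.0 * dist
    e = np.exp(logits - logits.max())
    return e / e.sum()


def predict_gaze_trajectory(gaze_xy, coeffs, steps=3, grid=GRID):
    """Stand-in for the trained eye trajectory prediction model: drift
    the gaze linearly toward the highest-attention sub-region over the
    subsequent time points of the current time period."""
    target = _subregion_centers(grid)[int(np.argmax(coeffs))]
    g = np.asarray(gaze_xy, dtype=float)
    return [tuple(g + (target - g) * (k + 1) / steps) for k in range(steps)]


def render(subregions, coeffs, hi_thresh=0.10):
    """Foveated rendering: keep high-attention sub-regions at full
    resolution; render peripheral ones at quarter resolution."""
    return [reg if c >= hi_thresh else reg[::2, ::2]
            for reg, c in zip(subregions, coeffs)]
```

For each subsequent time point, a predicted gaze point from `predict_gaze_trajectory` is fed back into `attention_coefficients`, so the per-frame rendering budget follows where the user is expected to look rather than the whole frame.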