US 12,277,271 B2
Method for rendering video images in VR scenes
Kun Wang, Jiangsu (CN); Jichun Li, Jiangsu (CN); Mengze Wang, Jiangsu (CN); and Youxin Chen, Jiangsu (CN)
Assigned to Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed by Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed on Jun. 5, 2024, as Appl. No. 18/734,497.
Application 18/734,497 is a continuation of application No. PCT/IB2024/055190, filed on May 29, 2024.
Claims priority of application No. 202311048685.2 (CN), filed on Aug. 18, 2023.
Prior Publication US 2025/0060814 A1, Feb. 20, 2025
Int. Cl. G06F 3/01 (2006.01); G06T 7/11 (2017.01); G06T 15/20 (2011.01)
CPC G06F 3/013 (2013.01) [G06T 7/11 (2017.01); G06T 15/20 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10048 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30201 (2013.01); G06T 2207/30241 (2013.01); G06T 2207/30268 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for rendering video images in virtual reality (VR) scenes, the method comprising:
providing a video image at a current time point;
dividing the video image at the current time point into a plurality of sub-regions;
inputting image feature information of the sub-regions and acquired user viewpoint feature information into a trained attention model for processing to obtain attention coefficients of the sub-regions indicating probability values at which user viewpoints at a next time point fall into the sub-regions;
rendering the sub-regions based on the attention coefficients of the sub-regions to obtain a rendered video image at the current time point;
inputting the attention coefficients of the sub-regions and the image feature information of the sub-regions into a trained user eyes trajectory prediction model for processing;
obtaining user eyes trajectory information in a current time period;
dividing, for video images at subsequent time points within the current time period, the video images at the subsequent time points into a plurality of sub-regions, and calculating attention coefficients of the sub-regions in a video image at each of the subsequent time points within the current time period respectively based on the user eyes trajectory information in the current time period; and
rendering the corresponding sub-regions based on the attention coefficients of the sub-regions to obtain a rendered video image at each of the subsequent time points.
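The claimed pipeline (divide a frame into sub-regions, score each sub-region with an attention model, render at a quality tier driven by the attention coefficients, then reuse a predicted gaze trajectory for subsequent frames) can be sketched as below. This is a hedged illustration only: the patent does not disclose model architectures, so `attention_model` and `predict_trajectory` are hypothetical stand-ins (a distance-based softmax and a linear extrapolation toward the most attended sub-region), and the grid size, feature choice, and rendering budget are assumptions, not the patented implementation.

```python
import numpy as np

def divide_into_subregions(frame, grid=(4, 4)):
    """Split an H x W frame into grid-aligned sub-regions (assumed layout)."""
    h, w = frame.shape[:2]
    gh, gw = grid
    return [frame[i * h // gh:(i + 1) * h // gh,
                  j * w // gw:(j + 1) * w // gw]
            for i in range(gh) for j in range(gw)]

def attention_model(region_features, viewpoint):
    """Stand-in for the trained attention model: scores sub-regions by
    proximity of their features (here, region centers) to the user
    viewpoint, then softmax-normalises into attention coefficients,
    i.e. probabilities that the next-time-point gaze lands in each."""
    logits = -np.linalg.norm(region_features - viewpoint, axis=1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def render_subregions(regions, coeffs, budget=3):
    """Foveated rendering sketch: the `budget` most attended
    sub-regions get a 'high' quality tier, the rest 'low'."""
    top = set(np.argsort(coeffs)[-budget:])
    return ['high' if i in top else 'low' for i in range(len(regions))]

def predict_trajectory(coeffs, region_centers, viewpoint, steps=3):
    """Stand-in for the trained eye-trajectory prediction model:
    linearly interpolates the viewpoint toward the most attended
    region center over `steps` subsequent time points."""
    target = region_centers[int(np.argmax(coeffs))]
    return [viewpoint + (target - viewpoint) * (k + 1) / steps
            for k in range(steps)]

# Usage sketch: one current frame, then subsequent frames rendered
# from the predicted trajectory instead of fresh gaze input.
frame = np.zeros((64, 64))
regions = divide_into_subregions(frame)
centers = np.array([[8 + 16 * (k // 4), 8 + 16 * (k % 4)]
                    for k in range(16)], dtype=float)
viewpoint = np.array([8.0, 8.0])
coeffs = attention_model(centers, viewpoint)
quality = render_subregions(regions, coeffs)
trajectory = predict_trajectory(coeffs, centers, viewpoint)
future_quality = [render_subregions(regions, attention_model(centers, v))
                  for v in trajectory]
```

The key design point in the claim is that the attention model runs once on the current frame, while later frames in the same time period are rendered from the predicted trajectory, amortising the model's cost across the period.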