CPC G06T 5/50 (2013.01) [G06T 3/18 (2024.01); G06T 5/70 (2024.01); G06T 5/80 (2024.01); G06T 7/55 (2017.01); G06T 15/20 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20016 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20221 (2013.01); G06T 2207/30201 (2013.01)]. 18 Claims.

1. A computer-implemented method comprising:
receiving a plurality of depth images associated with a target subject in at least one of a plurality of input images;
receiving a plurality of view parameters for generating a virtual view of the target subject;
generating a plurality of warped images based on the plurality of input images, the plurality of view parameters, and at least one of the plurality of depth images;
in response to providing the plurality of depth images, the plurality of view parameters, and the plurality of warped images to a neural network, receiving, from the neural network, at least one blending weight for assigning color to pixels of the virtual view of the target subject;
generating, based on the at least one blending weight and the virtual view, a synthesized image according to the plurality of view parameters; and
correcting for detected occlusions in the synthesized image based on a difference between a depth of a geometrically fused model and a depth observed in the plurality of depth images.
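For illustration only (this sketch is not part of the claim), the steps recited in claim 1 can be mocked up in a few lines of NumPy. The disparity-based warp, the softmax weighting that stands in for the claimed neural network, and the fixed depth-difference threshold for occlusion detection are all assumptions made for the example, not the patent's method:

```python
import numpy as np

def warp_image(image, depth, baseline):
    # Toy backward warp: horizontal disparity proportional to inverse
    # depth. A stand-in for full reprojection under the claim's "view
    # parameters" (camera intrinsics/extrinsics are unspecified here).
    h, w = image.shape
    disparity = np.round(baseline / depth).astype(int)
    cols = np.clip(np.arange(w)[None, :] + disparity, 0, w - 1)
    rows = np.arange(h)[:, None]
    return image[rows, cols]

def blend(warped_images, depths):
    # Placeholder for the claimed neural network: a softmax over
    # negative depth assigns larger blending weights to nearer views
    # (an assumed heuristic, not the patent's learned weights).
    logits = -np.stack(depths)
    weights = np.exp(logits - logits.max(axis=0))
    weights /= weights.sum(axis=0)
    synthesized = (weights * np.stack(warped_images)).sum(axis=0)
    return synthesized, weights

def correct_occlusions(synthesized, fused_depth, observed_depth,
                       fallback, tol=0.1):
    # Flag pixels where the geometrically fused model's depth disagrees
    # with the observed depth beyond `tol`, and fill them from a
    # fallback image (the fill strategy is an assumption).
    occluded = np.abs(fused_depth - observed_depth) > tol
    out = synthesized.copy()
    out[occluded] = fallback[occluded]
    return out
```

A single-channel image and a constant hypothetical `baseline` keep the sketch short; a real implementation would operate on color images and derive per-view disparities from the recited view parameters.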