CPC G06T 15/205 (2013.01) [G06T 15/06 (2013.01); G06T 15/80 (2013.01); G06T 2207/10028 (2013.01)] | 20 Claims |
1. A method, comprising:
receiving a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object;
generating an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images, wherein the scene representation model comprises:
a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene;
a neural point volume rendering model configured to determine, for each pixel of the output 2D image and using the neural point cloud and a volume rendering process, a color value using a shading point color value and a density value of each shading point of a plurality of shading points, the shading point color value and the density value based on features of one or more neural points located within a predefined proximity of the shading point; and
transmitting, responsive to the request, the output 2D image, wherein each pixel of the output 2D image includes the respective determined color value.
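The rendering step recited in the claim can be illustrated with a minimal sketch. The helper names (`neighbors_within`, `shade_point`, `render_pixel`), the radius-based proximity test, the uniform step size `delta`, and the simple feature averaging are all illustrative assumptions, not part of the claim; the claim itself leaves the neural aggregation and the form of the neural point features unspecified.

```python
import math

def neighbors_within(neural_points, shading_point, radius):
    """Return neural points within a predefined proximity (here, a
    Euclidean radius -- an assumed proximity measure) of a shading point."""
    return [pt for pt in neural_points
            if math.dist(pt["pos"], shading_point) <= radius]

def shade_point(neural_points, shading_point, radius):
    """Derive a shading-point color value and density value from the
    features of nearby neural points. Plain averaging stands in here for
    the learned neural aggregation the claim implies."""
    near = neighbors_within(neural_points, shading_point, radius)
    if not near:
        return (0.0, 0.0, 0.0), 0.0  # empty space: no color, zero density
    n = len(near)
    color = tuple(sum(p["color"][i] for p in near) / n for i in range(3))
    density = sum(p["density"] for p in near) / n
    return color, density

def render_pixel(neural_points, shading_points, radius, delta):
    """Volume-render one pixel: alpha-composite the shading points
    sampled along the pixel's ray, front to back."""
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving to this depth
    for sp in shading_points:
        color, density = shade_point(neural_points, sp, radius)
        alpha = 1.0 - math.exp(-density * delta)  # segment opacity
        weight = transmittance * alpha            # this point's contribution
        pixel = [c + weight * ci for c, ci in zip(pixel, color)]
        transmittance *= 1.0 - alpha
    return pixel
```

Under this sketch, a shading point with no neural points in its proximity contributes nothing to the pixel, and a dense, nearby neural point dominates the composited color, consistent with standard emission-absorption volume rendering.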