US 12,073,507 B2
Point-based neural radiance field for three dimensional scene representation
Zexiang Xu, San Jose, CA (US); Zhixin Shu, San Jose, CA (US); Sai Bi, San Jose, CA (US); Qiangeng Xu, Los Angeles, CA (US); Kalyan Sunkavalli, San Jose, CA (US); and Julien Philip, London (GB)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Jul. 9, 2022, as Appl. No. 17/861,199.
Prior Publication US 2024/0013477 A1, Jan. 11, 2024
Int. Cl. G06T 15/20 (2011.01); G06T 15/06 (2011.01); G06T 15/80 (2011.01)
CPC G06T 15/205 (2013.01) [G06T 15/06 (2013.01); G06T 15/80 (2013.01); G06T 2207/10028 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
receiving a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object;
generating an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images, wherein the scene representation model comprises:
a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene;
a neural point volume rendering model configured to determine, for each pixel of the output 2D image and using the neural point cloud and a volume rendering process, a color value using a shading point color value and a density value of each shading point of a plurality of shading points, the shading point color value and the density value based on features of one or more neural points located within a predefined proximity to the shading point; and
transmitting, responsive to the request, the output 2D image, wherein each pixel of the output 2D image includes the respective determined color value.
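
For illustration only, the following is a minimal sketch (Python/NumPy) of the volume rendering step recited in the claim: each shading point along a camera ray receives a color value and a density value derived from features of neural points located within a predefined proximity, and the pixel color is obtained by compositing those values along the ray. The names aggregate_neural_points, color_head, density_head, radius, and num_samples are hypothetical and not taken from the patent, and the inverse-distance feature weighting is an assumption; the claim itself only requires that the shading point color and density be based on features of nearby neural points.

import numpy as np

def aggregate_neural_points(shading_pt, point_positions, point_features, radius=0.1):
    # Hypothetical aggregation: inverse-distance weighting of the features of
    # neural points located within a predefined proximity (radius) of the shading point.
    dists = np.linalg.norm(point_positions - shading_pt, axis=1)
    mask = dists < radius
    if not mask.any():
        return None
    weights = 1.0 / (dists[mask] + 1e-8)
    weights /= weights.sum()
    return (weights[:, None] * point_features[mask]).sum(axis=0)

def render_pixel(ray_origin, ray_dir, point_positions, point_features,
                 color_head, density_head, num_samples=64, t_near=0.0, t_far=4.0):
    # Standard emission-absorption volume rendering along one ray:
    # composite the shading point color values weighted by alpha (from the
    # density value) and the accumulated transmittance.
    ts = np.linspace(t_near, t_far, num_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        shading_pt = ray_origin + t * ray_dir
        feat = aggregate_neural_points(shading_pt, point_positions, point_features)
        if feat is None:
            continue  # no neural points within the predefined proximity
        sigma = density_head(feat)   # density value at the shading point
        c = color_head(feat)         # shading point color value
        alpha = 1.0 - np.exp(-sigma * delta)
        color += transmittance * alpha * c
        transmittance *= (1.0 - alpha)
    return color

In this sketch, color_head and density_head stand in for whatever learned mapping (e.g., small MLP heads) turns an aggregated neural point feature into the shading point color value and density value; the claim does not prescribe a particular aggregation or network form.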