CPC G06T 15/50 (2013.01) [G06T 7/194 (2017.01); G06T 7/55 (2017.01); G06T 7/60 (2013.01); G06T 7/80 (2017.01); G06T 15/06 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2210/12 (2013.01)] — 18 Claims

1. A method for generating a three-dimensional (3D) representation of an object from two-dimensional (2D) images including the object, the method comprising:
determining camera parameters of the images including the object, the images captured under different conditions;
estimating a geometry of the object and refining the determined camera parameters using the images including the object and corresponding foreground masks defining a region of the object within a corresponding one of the images, the estimated geometry including density information;
producing surface normals of the object using the estimated geometry, wherein producing the surface normals comprises, for each image:
calculating a bounding box of the object;
discretizing the bounding box into a density value grid;
extracting a density value of each grid center in the density value grid;
remapping the extracted density value in the density value grid using a mapping function based on a controllable parameter to adjust between smooth predictions including less noise and sharper predictions including more noise;
estimating a gradient of the remapped extracted density values by applying a three-dimensional (3D) convolution to the remapped extracted density values in the density value grid; and
adjusting the estimated gradient to produce the surface normals, wherein a magnitude of each of the adjusted surface normals is no larger than 1; and
inferring surface material properties and per-image lighting conditions based on the estimated geometry and surface normals using ray sampling to obtain the 3D representation.