| CPC G06T 19/20 (2013.01) [G01S 17/894 (2020.01); G06T 5/50 (2013.01); G06T 7/90 (2017.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20021 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2210/56 (2013.01); G06T 2219/2012 (2013.01)] | 15 Claims |

1. A system comprising:
a three-dimensional (3D) scanner;
a camera with a viewpoint that is different from a viewpoint of the 3D scanner; and
at least one processor coupled with the 3D scanner and the camera, the at least one processor configured to:
access a point cloud captured by the 3D scanner, the point cloud comprising depth values of points in a surrounding environment;
access a two-dimensional (2D) image captured by the camera, the 2D image comprising a plurality of pixels representing color information of the points in the surrounding environment;
generate a 3D scene by mapping the point cloud with the 2D image;
receive an input that selects, from the 3D scene, a portion to be colorized synthetically;
colorize one or more points in the selected portion of the 3D scene, the colorizing comprising:
generating a reflectance image based on an intensity image of the point cloud;
generating an occlusion mask that identifies the selected portion in the reflectance image; and
estimating, using a trained machine learning model, a color for each of the one or more points in the selected portion based on the reflectance image, the occlusion mask, and the 2D image; and
update the 3D scene by using the estimated colors from the trained machine learning model to colorize the selected portion.
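For orientation only, the sketch below illustrates the colorization flow recited in claim 1: deriving a reflectance image from the scanner intensity image, building an occlusion mask for the user-selected portion, and estimating colors for that portion from the reflectance image, mask, and camera image. All names (reflectance_image, occlusion_mask, StubColorModel, colorize_selection) are hypothetical, and the trained machine learning model is replaced by a trivial stub; the claim does not specify any particular implementation.

```python
# Minimal, hypothetical sketch of the colorization flow in claim 1.
# Not the patented implementation; the ML model is a naive stand-in.
import numpy as np


def reflectance_image(intensity: np.ndarray) -> np.ndarray:
    """Normalize a scanner intensity image into a [0, 1] reflectance image."""
    lo, hi = intensity.min(), intensity.max()
    return (intensity - lo) / (hi - lo + 1e-8)


def occlusion_mask(shape, selection):
    """Binary mask identifying the selected portion to be colorized synthetically."""
    mask = np.zeros(shape, dtype=bool)
    rows, cols = selection  # e.g. (slice(16, 32), slice(16, 32))
    mask[rows, cols] = True
    return mask


class StubColorModel:
    """Stand-in for the trained machine learning model of the claim.

    A real model would predict colors jointly from the reflectance image,
    the occlusion mask, and the 2D camera image; this stub just fills the
    masked region with the mean of the unmasked camera colors.
    """

    def predict(self, reflectance, mask, rgb_image):
        est = rgb_image.copy()
        mean_color = rgb_image[~mask].mean(axis=0)
        est[mask] = mean_color  # naive fill for the occluded/selected region
        return est


def colorize_selection(intensity, rgb_image, selection, model):
    """Estimate colors for the selected portion and return them with the mask."""
    refl = reflectance_image(intensity)          # from the intensity image
    mask = occlusion_mask(intensity.shape, selection)
    colors = model.predict(refl, mask, rgb_image)
    return colors, mask


if __name__ == "__main__":
    h, w = 64, 64
    intensity = np.random.rand(h, w).astype(np.float32)   # scanner intensity image
    rgb = np.random.rand(h, w, 3).astype(np.float32)      # camera 2D image
    colors, mask = colorize_selection(
        intensity, rgb, (slice(16, 32), slice(16, 32)), StubColorModel()
    )
    print("colorized pixels:", int(mask.sum()))
```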