CPC G06T 7/344 (2017.01) [G01S 17/89 (2013.01); G06T 3/14 (2024.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01)]
11 Claims

1. A system comprising:
a three-dimensional (3D) scanner that captures a 3D point cloud that comprises a plurality of 3D coordinates corresponding to one or more objects scanned in a surrounding environment;
a camera that captures an image of the surrounding environment, wherein the image captures color information of the surrounding environment including the one or more objects, the image being an ultrawide-angle image with an angular field of view of at least 180°; and
one or more processors operably coupled to the 3D scanner and the camera that register a first scan that is captured from a first position with a second scan that is captured from a second position, wherein the first scan comprises a first 3D point cloud captured via the 3D scanner and a first ultrawide-angle image captured via the camera, and the second scan comprises a second 3D point cloud captured via the 3D scanner and a second ultrawide-angle image captured via the camera, wherein the first scan and the second scan capture at least one overlapping portion, and wherein registering the first scan and the second scan comprises:
partitioning each of the ultrawide-angle images into a plurality of regions comprising a first region and a second region, the plurality of regions being concentric rings around a center of the ultrawide-angle image;
for a set of 3D coordinates (X, Y, Z) from each of the first 3D point cloud and the second 3D point cloud, computing a corresponding set of 2D coordinates (x, y) in the corresponding ultrawide-angle image;
in response to the 2D coordinates (x, y) being in the first region of the ultrawide-angle image, computing a first set of correction factors (Δx1, Δy1) associated with the first region and based on an estimated lens distortion of the camera in the first region, and adjusting the 2D coordinates using the first set of correction factors;
in response to the 2D coordinates being in the second region of the ultrawide-angle image, computing a second set of correction factors (Δx2, Δy2) associated with the second region and based on an estimated lens distortion of the camera in the second region, and adjusting the 2D coordinates using the second set of correction factors;
comparing the first ultrawide-angle image and the second ultrawide-angle image using the adjusted 2D coordinates to determine matching features in the first ultrawide-angle image and the second ultrawide-angle image;
registering the first ultrawide-angle image and the second ultrawide-angle image based on the matching features; and
aligning the first 3D point cloud and the second 3D point cloud based on the matching features.
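The projection and region-wise correction steps recited in the claim can be sketched as follows. This is a minimal illustration, not the claimed implementation: it assumes an equidistant fisheye model (r = f·θ) for the ultrawide-angle camera, and the focal length, image center, ring width, and the per-ring correction table are all hypothetical values standing in for the estimated lens distortion of each concentric-ring region.

```python
import math

def project_point(X, Y, Z, f=800.0, cx=1024.0, cy=1024.0):
    """Map a 3D point to 2D image coordinates under an equidistant
    fisheye model: radial distance r = f * theta, where theta is the
    angle between the ray and the optical axis (+Z)."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle off the optical axis
    phi = math.atan2(Y, X)                    # azimuth around the axis
    r = f * theta
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

def ring_index(x, y, cx=1024.0, cy=1024.0, ring_width=256.0):
    """Concentric-ring region index from the distance to image center."""
    return int(math.hypot(x - cx, y - cy) // ring_width)

# Hypothetical per-ring correction factors (Δx, Δy), standing in for
# values estimated from the lens distortion within each ring region.
CORRECTIONS = {0: (0.0, 0.0), 1: (1.5, -0.8), 2: (3.2, -1.9)}

def corrected_2d(X, Y, Z):
    """Project a 3D point, then adjust the 2D coordinates with the
    correction factors of the ring region the point falls in."""
    x, y = project_point(X, Y, Z)
    dx, dy = CORRECTIONS.get(ring_index(x, y), (0.0, 0.0))
    return x + dx, y + dy
```

A point on the optical axis lands at the image center in ring 0 and is left unchanged, while points farther off-axis fall in outer rings and receive that ring's correction.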
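The final step, aligning the first and second 3D point clouds based on the matching features, is commonly solved as a least-squares rigid registration. The sketch below uses the Kabsch/SVD method and assumes the feature matching has already produced paired 3D correspondences (one point per cloud per match); the claim itself does not specify this particular solver.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t,
    computed from matched 3D correspondences via the Kabsch/SVD method.

    src, dst: (N, 3) arrays of corresponding points from the two clouds.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Applying the recovered (R, t) to the first 3D point cloud brings it into the coordinate frame of the second, completing the registration of the two scans.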