US 11,677,920 B2
Capturing and aligning panoramic image and depth data
Kyle Simek, San Jose, CA (US); David Gausebeck, Mountain View, CA (US); and Matthew Tschudy Bell, Palo Alto, CA (US)
Assigned to Matterport, Inc., Sunnyvale, CA (US)
Filed by Matterport, Inc., Sunnyvale, CA (US)
Filed on Sep. 3, 2019, as Appl. No. 16/559,135.
Application 16/559,135 is a continuation of application No. 15/417,162, filed on Jan. 26, 2017, granted, now Pat. No. 10,848,731.
Application 15/417,162 is a continuation-in-part of application No. 14/070,426, filed on Nov. 1, 2013, granted, now Pat. No. 10,482,679.
Application 14/070,426 is a division of application No. 13/776,688, filed on Feb. 25, 2013, granted, now Pat. No. 9,324,190, issued on Apr. 26, 2016.
Claims priority of provisional application 61/603,221, filed on Feb. 24, 2012.
Prior Publication US 2019/0394441 A1, Dec. 26, 2019
Int. Cl. H04N 13/106 (2018.01); H04N 5/265 (2006.01); H04N 13/254 (2018.01); H04N 13/239 (2018.01); H04N 13/232 (2018.01); H04N 13/271 (2018.01); H04N 23/45 (2023.01); H04N 23/698 (2023.01)
CPC H04N 13/106 (2018.05) [H04N 5/265 (2013.01); H04N 13/232 (2018.05); H04N 13/239 (2018.05); H04N 13/254 (2018.05); H04N 13/271 (2018.05); H04N 23/45 (2023.01); G06T 2207/20221 (2013.01); H04N 23/698 (2023.01)] 21 Claims
OG exemplary drawing
 
1. A device comprising:
a housing including:
at least one camera having a fisheye camera lens configured to capture 2D image data of an environment from a fixed location;
at least one depth sensor device including at least one light detection and ranging (LiDAR) device, the at least one depth sensor device being configured to capture 3D depth data of the environment;
a horizontal rotatable mount configured to enable the fisheye camera lens of the at least one camera to move along a horizontal x-axis relative to the device, the at least one camera being capable of capturing a plurality of images with mutually overlapping fields of view at different viewpoints;
at least one processor configured to map the 2D image data from the at least one camera and the 3D depth data from the at least one depth sensor device to a common spatial 3D coordinate space based on known capture positions and orientations of the at least one camera and the at least one depth sensor device to facilitate associating 3D coordinates with respective visual features included in the 2D image data relative to the common spatial 3D coordinate space; and
a 3D model generation component configured to generate a 3D model of the environment from the 2D image data from the at least one camera and the 3D depth data from the at least one depth sensor device.
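The rotatable-mount limitation of claim 1 lends itself to a short illustration. The following Python sketch is not taken from the patent: it shows one way successive capture poses could be derived from the mount angle, where the 5 cm radial lens offset, the 90-degree step schedule, and the function name are all illustrative assumptions.

```python
import numpy as np

def mount_pose(angle_rad: float, lens_offset_m: float = 0.05) -> np.ndarray:
    """4x4 pose of the camera after the mount rotates by angle_rad about the
    device's vertical axis. The radial lens offset (assumed 5 cm here) means
    each rotation step yields a slightly different viewpoint."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = R @ np.array([lens_offset_m, 0.0, 0.0])
    return T

# Four captures at 90-degree steps: a fisheye lens with a field of view wider
# than 90 degrees yields the mutually overlapping images the claim recites.
poses = [mount_pose(np.deg2rad(a)) for a in (0, 90, 180, 270)]
```

With the capture pose of each rotation step known in advance, the overlap between adjacent images is determined by geometry rather than feature matching, which is what lets the processor limitation below treat the capture positions and orientations as given.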
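The mapping recited in the processor limitation can be sketched in the same spirit. The Python below (numpy only) transforms LiDAR returns into a shared world frame using known capture poses, then projects them into a panorama so that each 3D coordinate is associated with a visual feature at a pixel. The equirectangular projection, the identity placeholder poses, and all function names are assumptions for illustration, not the patent's method.

```python
import numpy as np

def to_world(points: np.ndarray, T_world_sensor: np.ndarray) -> np.ndarray:
    """Map Nx3 sensor-frame points into the common world frame using the
    sensor's known 4x4 capture pose (rotation + translation)."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T_world_sensor.T)[:, :3]

def world_to_equirect(points_world: np.ndarray, T_world_cam: np.ndarray,
                      width: int, height: int) -> np.ndarray:
    """Project world-frame points into an equirectangular panorama captured
    at a known camera pose, returning Nx2 pixel coordinates."""
    T_cam_world = np.linalg.inv(T_world_cam)
    homog = np.hstack([points_world, np.ones((len(points_world), 1))])
    p = (homog @ T_cam_world.T)[:, :3]
    azimuth = np.arctan2(p[:, 0], p[:, 2])                      # [-pi, pi]
    elevation = np.arcsin(p[:, 1] / np.linalg.norm(p, axis=1))  # [-pi/2, pi/2]
    u = (azimuth / (2.0 * np.pi) + 0.5) * width
    v = (elevation / np.pi + 0.5) * height
    return np.stack([u, v], axis=1)

# Each LiDAR return, once expressed in world coordinates, lands on a pixel of
# the panorama; that pixel's visual feature is thereby associated with a 3D
# coordinate in the common space, as the processor limitation recites.
lidar_points = np.random.rand(100, 3) * 5.0  # placeholder depth returns
T_world_lidar = np.eye(4)                    # assumed known capture pose
T_world_cam = np.eye(4)                      # assumed known capture pose
world_pts = to_world(lidar_points, T_world_lidar)
pixels = world_to_equirect(world_pts, T_world_cam, width=8192, height=4096)
```

Because both sensors' poses are known from the device geometry, the association requires no iterative registration: one rigid transform per sensor places everything in the common spatial 3D coordinate space from which the 3D model generation component builds the model.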