US 12,444,148 B2
Camera calibration using depth sensor data
Mohamed Souiai, San Francisco, CA (US); Moshe Bouhnik, Holon (IL); and Ankur Gupta, Union City, CA (US)
Assigned to Magic Leap, Inc., Plantation, FL (US)
Appl. No. 18/688,588
Filed by Magic Leap, Inc., Plantation, FL (US)
PCT Filed Sep. 8, 2022, PCT No. PCT/US2022/076080
§ 371(c)(1), (2) Date Mar. 1, 2024,
PCT Pub. No. WO2023/039452, PCT Pub. Date Mar. 16, 2023.
Claims priority of provisional application 63/260,998, filed on Sep. 8, 2021.
Prior Publication US 2024/0371114 A1, Nov. 7, 2024
Int. Cl. G06T 19/00 (2011.01); G06T 7/80 (2017.01); G06T 19/20 (2011.01)
CPC G06T 19/006 (2013.01) [G06T 7/80 (2017.01); G06T 19/20 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2219/2016 (2013.01)] 13 Claims
OG exemplary drawing
 
1. A method performed by one or more computing devices, the method comprising:
capturing, by the one or more computing devices, a frame of data that includes (i) depth data from a depth sensor of a device, the depth data indicating distances from the depth sensor to objects in an environment of the device, and (ii) image data from a camera of the device, the image data representing visible features of the objects in the environment;
transforming, by the one or more computing devices and as transformed points, selected points from the depth data using camera calibration data for the camera, wherein the selected points are transformed to corresponding locations in a three-dimensional space that is based on the image data, and wherein the camera calibration data indicates a first translation and a first rotation between the camera and a reference position on the device;
projecting, by the one or more computing devices and as projected points, the transformed points from the three-dimensional space to two-dimensional image data from the camera;
generating, by the one or more computing devices, updated camera calibration data based on differences between (i) locations of the projected points and (ii) locations at which features representing the selected points appear in the two-dimensional image data from the camera, wherein the updated camera calibration data indicates a second translation and a second rotation between the camera and the reference position; and
using, by the one or more computing devices, the updated camera calibration data in a simultaneous localization and mapping process that determines at least one of an update to a three-dimensional environment model for the environment and an estimated position of the device within the environment.
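 
The claimed method amounts to extrinsics refinement by reprojection error: depth points are transformed into the camera frame using the current calibration (the first rotation and translation), projected into the image, and compared against where the corresponding features are actually observed; the residual drives the updated calibration (the second rotation and translation). What follows is a minimal sketch of that loop, not the patented implementation: it assumes a pinhole camera with a known intrinsic matrix K, parameterizes the extrinsics as an axis-angle rotation plus a translation, and uses SciPy's generic least_squares solver for the update step; the function names (project, refine_extrinsics) and the synthetic data are illustrative only.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_ref, rotvec, trans, K):
    # Transform reference-frame 3-D points into the camera frame (the
    # claim's "transformed points"), then apply the pinhole projection
    # to obtain pixel coordinates (the "projected points").
    R = Rotation.from_rotvec(rotvec).as_matrix()
    pts_cam = points_ref @ R.T + trans
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]   # perspective divide by depth

def refine_extrinsics(points_ref, observed_uv, rotvec0, trans0, K):
    # Generate updated calibration data by minimizing the differences
    # between the projected points and the observed feature locations.
    def residuals(x):
        return (project(points_ref, x[:3], x[3:], K) - observed_uv).ravel()
    x0 = np.concatenate([rotvec0, trans0])
    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3:]     # updated rotation, updated translation

# Synthetic check (hypothetical values): perturb a known pose, then
# recover it from an initial guess of zero rotation and translation.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
true_rot = np.array([0.0, 0.02, 0.0])   # axis-angle rotation (radians)
true_t = np.array([0.05, 0.0, 0.0])     # translation offset (meters)
pts = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 5.0], size=(100, 3))
obs = project(pts, true_rot, true_t, K) # stand-in for detected features
rot, t = refine_extrinsics(pts, obs, np.zeros(3), np.zeros(3), K)
print("recovered rotation:", rot, "recovered translation:", t)

In a real system the observed feature locations would come from feature detection and matching against the camera image rather than from synthetic projection, and the refined extrinsics would then feed the SLAM process described in the final step of the claim.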