CPC G06T 19/006 (2013.01) [G06T 7/80 (2017.01); G06T 19/20 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10028 (2013.01); G06T 2219/2016 (2013.01)]

13 Claims

1. A method performed by one or more computing devices, the method comprising:
capturing, by the one or more computing devices, a frame of data that includes (i) depth data from a depth sensor of a device, the depth data indicating distances from the depth sensor to objects in an environment of the device, and (ii) image data from a camera of the device, the image data representing visible features of the objects in the environment;
transforming, by the one or more computing devices and as transformed points, selected points from the depth data using camera calibration data for the camera, wherein the selected points are transformed to corresponding locations in a three-dimensional space that is based on the image data, and wherein the camera calibration data indicates a first translation and a first rotation between the camera and a reference position on the device;
projecting, by the one or more computing devices and as projected points, the transformed points from the three-dimensional space to two-dimensional image data from the camera;
generating, by the one or more computing devices, updated camera calibration data based on differences between (i) locations of the projected points and (ii) locations at which features representing the selected points appear in the two-dimensional image data from the camera, wherein the updated camera calibration data indicates a second translation and a second rotation between the camera and the reference position; and
using, by the one or more computing devices, the updated camera calibration data in a simultaneous localization and mapping process that determines at least one of an update to a three-dimensional environment model for the environment and an estimated position of the device within the environment.
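To make the pipeline of claim 1 concrete, the sketch below walks through the same steps: transforming selected depth points with a candidate rotation and translation, projecting them through the camera model to two dimensions, and minimizing the pixel differences against observed feature locations to produce updated calibration data. This is a minimal illustration under stated assumptions, not the patented implementation: the pinhole camera model, the 3x3 intrinsic matrix K, the use of scipy's least_squares solver, and every function name here (project_to_pixels, reprojection_residuals, refine_calibration) are choices made for the example.

```python
# A minimal sketch, assuming a pinhole camera model and a 3x3 intrinsic
# matrix K. All names and the choice of solver are assumptions for
# illustration; none of this is taken from the patent itself.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project_to_pixels(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    homog = points_cam @ K.T
    return homog[:, :2] / homog[:, 2:3]


def reprojection_residuals(params, depth_points, observed_px, K):
    """Pixel differences between projected depth points and observed features.

    params is a 6-vector: an axis-angle rotation (3) and a translation (3)
    mapping depth-sensor coordinates into the camera frame.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    points_cam = depth_points @ R.T + t            # transform selected points
    projected = project_to_pixels(points_cam, K)   # project 3D -> 2D
    return (projected - observed_px).ravel()       # per-point differences


def refine_calibration(depth_points, observed_px, K, rvec0, t0):
    """Solve for the rotation/translation that best aligns projected depth
    points with the image features representing them."""
    result = least_squares(
        reprojection_residuals,
        x0=np.concatenate([rvec0, t0]),
        args=(depth_points, observed_px, K),
    )
    return result.x[:3], result.x[3:]  # updated rotation and translation


# Hypothetical usage with synthetic data: generate observations from a
# known rotation/translation, then recover that calibration from scratch.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
true_R = Rotation.from_rotvec([0.02, -0.01, 0.005]).as_matrix()
true_t = np.array([0.05, 0.0, 0.01])
depth_points = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 5.0], size=(50, 3))
observed_px = project_to_pixels(depth_points @ true_R.T + true_t, K)
rvec, t = refine_calibration(depth_points, observed_px, K,
                             rvec0=np.zeros(3), t0=np.zeros(3))
```

The rotation and translation recovered here play the role of the second rotation and second translation recited in the claim; in a full system they would feed the simultaneous localization and mapping step, and a production implementation would typically add a robust loss and outlier rejection before trusting the update.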