US 12,379,215 B2
Efficient vision-aided inertial navigation using a rolling-shutter camera with inaccurate timestamps
Stergios I. Roumeliotis, Los Altos Hills, CA (US); and Chao Guo, Los Altos, CA (US)
Assigned to Regents of the University of Minnesota, Minneapolis, MN (US)
Filed by Regents of the University of Minnesota, Minneapolis, MN (US)
Filed on Aug. 1, 2023, as Appl. No. 18/363,593.
Application 18/363,593 is a continuation of application No. 16/025,574, filed on Jul. 2, 2018, granted, now 11,719,542.
Application 16/025,574 is a continuation of application No. 14/733,468, filed on Jun. 8, 2015, granted, now 10,012,504, issued on Jul. 3, 2018.
Claims priority of provisional application 62/014,532, filed on Jun. 19, 2014.
Prior Publication US 2023/0408262 A1, Dec. 21, 2023
Int. Cl. G01C 21/16 (2006.01); G06T 7/277 (2017.01)
CPC G01C 21/1656 (2020.08) [G06T 7/277 (2017.01); G06T 2207/30241 (2013.01); G06T 2207/30244 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A vision-aided inertial navigation system (VINS) comprising:
an image source configured to produce image data at a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein:
the image data captures feature observations within the 3D environment at each of the first set of time instances,
the image source comprises at least one sensor capable of capturing a plurality of rows of image data, and
a sensor of the at least one sensor is configured to capture the plurality of rows of image data row-by-row so that each row is captured at a different time instance than any of the first set of time instances;
an inertial measurement unit (IMU) configured to produce IMU data for the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances, wherein the IMU data indicates a motion of the VINS along the trajectory; and
a processor configured to:
compute poses for the image source as an extrapolation from poses for the IMU that are closest in time along the trajectory,
compute each of the poses for the image source by storing and updating a state vector having a sliding window of poses for the image source, wherein each of the poses for the image source corresponds to a different time instance of the first set of time instances at which the image data was captured by the image source, and
in response to the image source producing the image data, insert a most recent pose computed for the IMU into the state vector as an image source pose.
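
The claimed state-vector bookkeeping can be illustrated with a minimal sketch: a sliding window of image-source poses is kept in the state, the IMU pose is propagated at its own time instances, and when image data arrives the most recent IMU pose is inserted into the window as the image-source pose. This sketch is not the patented estimator (it omits covariance propagation, rolling-shutter row timing, timestamp-offset estimation, and measurement updates), and all names below (Pose, SlidingWindowState, propagate, on_image) are hypothetical illustrations.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Pose:
    t: float            # timestamp of the pose
    position: tuple     # (x, y, z)
    orientation: tuple  # quaternion (w, x, y, z)


class SlidingWindowState:
    """Hypothetical sketch of a sliding window of image-source poses."""

    def __init__(self, max_poses: int = 10):
        # Sliding window of image-source poses held in the state vector;
        # the oldest pose is dropped when the window is full.
        self.window = deque(maxlen=max_poses)
        self.latest_imu_pose = Pose(0.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))

    def propagate(self, imu_pose: Pose):
        # IMU data arrives at its own (second) set of time instances;
        # retain the most recently propagated IMU pose.
        self.latest_imu_pose = imu_pose

    def on_image(self, image_time: float) -> Pose:
        # In response to the image source producing image data, insert the
        # most recent IMU pose into the state vector as the image-source
        # pose for that time instance.
        cloned = Pose(image_time,
                      self.latest_imu_pose.position,
                      self.latest_imu_pose.orientation)
        self.window.append(cloned)
        return cloned


# Example usage with arbitrary, made-up timestamps:
state = SlidingWindowState(max_poses=5)
state.propagate(Pose(0.10, (0.1, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)))
camera_pose = state.on_image(image_time=0.105)  # image time near the latest IMU pose
```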