US 12,080,013 B2
Multi-view depth estimation leveraging offline structure-from-motion
Jiexiong Tang, Stockholm (SE); Rares Andrei Ambrus, San Francisco, CA (US); Sudeep Pillai, Santa Clara, CA (US); Vitor Guizilini, Santa Clara, CA (US); and Adrien David Gaidon, Mountain View, CA (US)
Assigned to TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)
Filed by TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)
Filed on Jul. 6, 2021, as Appl. No. 17/368,703.
Claims priority of provisional application 63/048,366, filed on Jul. 6, 2020.
Prior Publication US 2022/0005217 A1, Jan. 6, 2022
Int. Cl. G06T 7/593 (2017.01); G06T 7/536 (2017.01); G06T 7/70 (2017.01)
CPC G06T 7/596 (2017.01) [G06T 7/536 (2017.01); G06T 7/70 (2017.01); G06T 2207/10028 (2013.01); G06T 2207/30252 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for estimating depth of a scene, comprising:
selecting an image of the scene from a sequence of images of the scene captured via an in-vehicle sensor of a first agent;
identifying a plurality of previously captured images of the scene;
selecting a set of images from the plurality of previously captured images based on each image of the set of images satisfying depth criteria; and
estimating the depth of the scene based on the selected image and the selected set of images.
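
For a concrete picture of the claimed steps, the Python sketch below illustrates one possible reading of claim 1: a target frame is selected from the in-vehicle sequence, previously captured frames of the scene are identified, frames meeting a depth criterion are retained, and depth is estimated from the target and the retained set. Every identifier, threshold, and the baseline/overlap criterion here is an illustrative assumption, and the depth estimator is a placeholder; this is not the patented implementation.

```python
# Minimal sketch of the claimed pipeline; all names and thresholds are hypothetical.

from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class FrameRecord:
    """A captured image plus a camera pose recovered offline (e.g., by structure-from-motion)."""
    image: np.ndarray        # H x W x 3 image array
    position: np.ndarray     # camera center in world coordinates, shape (3,)
    view_dir: np.ndarray     # unit viewing direction in world coordinates, shape (3,)


def satisfies_depth_criteria(target: FrameRecord, candidate: FrameRecord,
                             min_baseline: float = 0.5,
                             max_baseline: float = 10.0,
                             min_view_overlap: float = 0.7) -> bool:
    """Hypothetical depth criteria: enough baseline for triangulation while
    still viewing roughly the same part of the scene."""
    baseline = float(np.linalg.norm(candidate.position - target.position))
    overlap = float(np.dot(candidate.view_dir, target.view_dir))
    return min_baseline <= baseline <= max_baseline and overlap >= min_view_overlap


def estimate_depth(target: FrameRecord, support: List[FrameRecord]) -> np.ndarray:
    """Placeholder for a multi-view depth estimator fed with the target image
    and the selected support images; returns a dummy depth map here."""
    h, w, _ = target.image.shape
    return np.zeros((h, w), dtype=np.float32)


def depth_for_scene(sequence: List[FrameRecord],
                    previously_captured: List[FrameRecord],
                    target_index: int) -> np.ndarray:
    # Select an image of the scene from the in-vehicle sequence.
    target = sequence[target_index]
    # Identify previously captured images and keep those satisfying the depth criteria.
    support = [f for f in previously_captured if satisfies_depth_criteria(target, f)]
    # Estimate the depth of the scene from the selected image and the selected set.
    return estimate_depth(target, support)
```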