US 11,783,593 B2
Monocular depth supervision from 3D bounding boxes
Vitor Guizilini, Santa Clara, CA (US); and Adrien David Gaidon, Mountain View, CA (US)
Assigned to TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)
Filed by TOYOTA RESEARCH INSTITUTE, INC., Los Altos, CA (US)
Filed on Jun. 2, 2022, as Appl. No. 17/830,918.
Application 17/830,918 is a continuation of application No. 16/909,907, filed on Jun. 23, 2020, granted, now Pat. No. 11,398,095.
Prior Publication US 2022/0292837 A1, Sep. 15, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/56 (2022.01); G06T 7/50 (2017.01); G06V 40/10 (2022.01); G06V 10/75 (2022.01); G06F 18/214 (2023.01)
CPC G06V 20/56 (2022.01) [G06F 18/214 (2023.01); G06T 7/50 (2017.01); G06V 10/751 (2022.01); G06V 40/10 (2022.01); G06T 2207/10028 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for navigating a vehicle through an environment, comprising:
assigning a first weight to each pixel associated with a dynamic object in a two-dimensional (2D) image of the environment;
assigning a second weight to each pixel associated with a static object in the 2D image, the first weight being greater than the second weight;
generating a dynamic object depth estimate for the dynamic object, the dynamic object depth estimate being associated with a first accuracy that is based on the first weight;
generating a static object depth estimate for the static object, the static object depth estimate being associated with a second accuracy that is based on the second weight, the first accuracy of the dynamic object depth estimate being greater than the second accuracy of the static object depth estimate;
generating a three-dimensional (3D) estimate of the environment based on the dynamic object depth estimate and the static object depth estimate; and
controlling an action of the vehicle based on the 3D estimate of the environment.
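The weighting scheme recited in claim 1 can be illustrated with a minimal, hypothetical sketch: a per-pixel weight map gives dynamic-object pixels (e.g., pixels inside masks derived from projected 3D bounding boxes) a larger weight than static-scene pixels, and that map scales a per-pixel depth error so training emphasizes depth accuracy on dynamic objects. The function names, the L1 error form, and the example weight values below are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def build_pixel_weights(height, width, dynamic_masks, w_dynamic=2.0, w_static=1.0):
    """Assign a first (larger) weight to pixels covered by dynamic-object masks
    and a second (smaller) weight to the remaining static-scene pixels.
    NOTE: weight values and mask source are assumptions for illustration."""
    weights = np.full((height, width), w_static, dtype=np.float32)
    for mask in dynamic_masks:          # each mask: boolean (H, W) array
        weights[mask] = w_dynamic
    return weights

def weighted_depth_loss(pred_depth, target_depth, weights):
    """Per-pixel L1 depth error scaled by the weight map, so supervision is
    stronger (and estimated depth more accurate) on dynamic-object pixels."""
    valid = target_depth > 0            # ignore pixels with no depth supervision
    err = np.abs(pred_depth - target_depth) * weights
    return err[valid].sum() / max(weights[valid].sum(), 1e-8)
```

In use, the weight map would be built once per training image from the dynamic-object masks, and the weighted loss would then drive a monocular depth network; the resulting depth map, combined across static and dynamic regions, supports the 3D estimate of the environment used for vehicle control.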