US 11,734,848 B2
Pseudo lidar
Ofer Springer, Jerusalem (IL); David Neuhof, Jerusalem (IL); Jeffrey Moskowitz, Tel Aviv (IL); Gal Topel, Jerusalem (IL); Nadav Shaag, Jerusalem (IL); Yotam Stern, Jerusalem (IL); Roy Lotan, Caesarea (IL); Shahar Harouche, Raanana (IL); and Daniel Einy, Gilboa (IL)
Assigned to MOBILEYE VISION TECHNOLOGIES LTD., Jerusalem (IL)
Filed by MOBILEYE VISION TECHNOLOGIES LTD., Jerusalem (IL)
Filed on Jun. 29, 2022, as Appl. No. 17/809,641.
Application 17/809,641 is a continuation of application No. PCT/US2020/067753, filed on Dec. 31, 2020.
Claims priority of provisional application 63/082,619, filed on Sep. 24, 2020.
Claims priority of provisional application 62/957,000, filed on Jan. 3, 2020.
Prior Publication US 2022/0327719 A1, Oct. 13, 2022
Int. Cl. G01S 17/86 (2020.01); G01C 21/30 (2006.01); G06T 7/55 (2017.01); G01S 17/931 (2020.01); B60W 60/00 (2020.01); G01B 11/22 (2006.01); G01S 17/89 (2020.01); B60W 10/04 (2006.01); B60W 10/18 (2012.01); B60W 10/20 (2006.01); B60W 30/09 (2012.01); G01C 21/00 (2006.01); G01C 21/16 (2006.01); G01S 7/481 (2006.01); G01S 17/42 (2006.01); G01S 17/58 (2006.01); H04N 23/90 (2023.01); H04N 23/698 (2023.01)
CPC G06T 7/55 (2017.01) [B60W 10/04 (2013.01); B60W 10/18 (2013.01); B60W 10/20 (2013.01); B60W 30/09 (2013.01); B60W 60/001 (2020.02); G01B 11/22 (2013.01); G01C 21/1652 (2020.08); G01C 21/1656 (2020.08); G01C 21/30 (2013.01); G01C 21/3885 (2020.08); G01S 7/4817 (2013.01); G01S 17/42 (2013.01); G01S 17/58 (2013.01); G01S 17/86 (2020.01); G01S 17/89 (2013.01); G01S 17/931 (2020.01); H04N 23/698 (2023.01); H04N 23/90 (2023.01); B60W 2420/42 (2013.01); B60W 2420/52 (2013.01); B60W 2554/20 (2020.02); B60W 2554/4042 (2020.02); B60W 2554/802 (2020.02); B60W 2556/45 (2020.02); B60W 2720/10 (2013.01); B60W 2720/24 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30252 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A navigation system for a host vehicle, the navigation system comprising:
at least one processor programmed to:
receive from a center camera onboard the host vehicle at least one captured center image including a representation of at least a portion of an environment of the host vehicle, receive from a left surround camera onboard the host vehicle at least one captured left surround image including a representation of at least a portion of the environment of the host vehicle, and receive from a right surround camera onboard the host vehicle at least one captured right surround image including a representation of at least a portion of the environment of the host vehicle, wherein a field of view of the center camera at least partially overlaps with both a field of view of the left surround camera and a field of view of the right surround camera;
provide the at least one captured center image, the at least one captured left surround image, and the at least one captured right surround image to an analysis module configured to generate an output relative to the at least one captured center image based on analysis of the at least one captured center image, the at least one captured left surround image, and the at least one captured right surround image, wherein the generated output includes per-pixel depth information for at least one region of the captured center image; and
cause at least one navigational action by the host vehicle based on the generated output including the per-pixel depth information for the at least one region of the captured center image,
wherein the analysis module includes at least one trained model trained based on training data including a combination of a plurality of images captured by cameras with at least partially overlapping fields of view and LIDAR point cloud information corresponding with at least some of the plurality of images.
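
The claimed arrangement (a center camera and left and right surround cameras with overlapping fields of view feeding a trained model that outputs per-pixel depth for the center image, with training supervised by LIDAR point clouds) can be outlined with a minimal sketch. The following PyTorch code is not the patented implementation; the network name PseudoLidarNet, the channel-concatenation of the three images, the sparse-LIDAR-masked L1 loss, and all layer sizes are assumptions made solely for illustration.

# Illustrative sketch only (assumed architecture, not the claimed implementation):
# a small multi-camera depth network that predicts per-pixel depth for the
# center image from center, left-surround, and right-surround images.
import torch
import torch.nn as nn

class PseudoLidarNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 RGB images concatenated along the channel axis -> 9 input channels
        self.encoder = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, center, left, right):
        x = torch.cat([center, left, right], dim=1)    # (B, 9, H, W)
        features = self.encoder(x)
        return torch.relu(self.depth_head(features))   # per-pixel depth (meters)

# Toy training step: supervise predicted depth only where projected LIDAR
# returns exist (sparse mask), mirroring the claim's training data that pairs
# overlapping-field-of-view camera images with LIDAR point cloud information.
def training_step(model, optimizer, center, left, right, lidar_depth, lidar_mask):
    optimizer.zero_grad()
    pred = model(center, left, right)
    loss = ((pred - lidar_depth).abs() * lidar_mask).sum() / lidar_mask.sum().clamp(min=1)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = PseudoLidarNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    B, H, W = 2, 64, 96
    center = torch.rand(B, 3, H, W)
    left = torch.rand(B, 3, H, W)
    right = torch.rand(B, 3, H, W)
    lidar_depth = torch.rand(B, 1, H, W) * 80.0           # synthetic ground-truth depth
    lidar_mask = (torch.rand(B, 1, H, W) > 0.95).float()  # sparse LIDAR hits
    print(training_step(model, opt, center, left, right, lidar_depth, lidar_mask))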