US 12,260,650 B2
Close-in sensing camera system
Kimberly Toth, Sunnyvale, CA (US); Jeremy Dittmer, Mountain View, CA (US); Giulia Guidi, Mountain View, CA (US); and Peter Avram, Ann Arbor, MI (US)
Assigned to Waymo LLC, Mountain View, CA (US)
Filed by WAYMO LLC, Mountain View, CA (US)
Filed on Nov. 7, 2023, as Appl. No. 18/503,422.
Application 18/503,422 is a continuation of application No. 18/081,260, filed on Dec. 14, 2022, granted, now 11,887,378.
Application 18/081,260 is a continuation of application No. 16/737,263, filed on Jan. 8, 2020, granted, now 11,557,127, issued on Jan. 17, 2023.
Claims priority of provisional application 62/954,930, filed on Dec. 30, 2019.
Prior Publication US 2024/0144697 A1, May 2, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/58 (2022.01); B60R 11/04 (2006.01); G01S 17/894 (2020.01); G01S 17/931 (2020.01); H04N 23/80 (2023.01)
CPC G06V 20/58 (2022.01) [B60R 11/04 (2013.01); G01S 17/894 (2020.01); G01S 17/931 (2020.01); H04N 23/80 (2023.01); B60R 2300/301 (2013.01); B60R 2300/802 (2013.01); B60R 2300/8093 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, by one or more processors of a vehicle configured to operate in an autonomous driving mode, lidar data from a lidar sensor arranged along an external sensing assembly of the vehicle and having a lidar field of view of a region of an external environment around the vehicle, the lidar field of view including an occlusion region within an immediate vicinity of the vehicle;
receiving, by the one or more processors, captured imagery from an image sensor positioned relative to the lidar sensor along the external sensing assembly to have an image field of view that is within the region of the external environment, the image field of view at least partly overlapping with the lidar field of view and encompassing at least a portion of the occlusion region of the lidar field of view;
detecting, based on at least one of the lidar data or the captured imagery, an object in the external environment;
classifying the object based on the captured imagery, the captured imagery including at least the portion of the occlusion region of the lidar field of view; and
determining whether to cause one or more systems of the vehicle to perform a driving action based on classifying the object.
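The claimed method can be illustrated as a minimal sketch of the detect-classify-decide pipeline. All class names, fields, thresholds, and labels below are hypothetical stand-ins for illustration only; they do not appear in the patent, and the actual implementation is not disclosed here.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for the claimed sensor inputs.

@dataclass
class LidarReturn:
    distance_m: float          # range to a detected surface
    in_occlusion_region: bool  # True if the point falls in the lidar's close-in blind spot

@dataclass
class CameraDetection:
    label: str                 # classifier output, e.g. "pedestrian"
    confidence: float
    overlaps_occlusion: bool   # detection lies in camera coverage of the lidar occlusion region

def detect_object(lidar: Optional[LidarReturn],
                  camera: Optional[CameraDetection]) -> bool:
    """Detect an object based on at least one of the lidar data or the captured imagery."""
    return lidar is not None or camera is not None

def classify_object(camera: Optional[CameraDetection]) -> Optional[str]:
    """Classify using imagery, which covers at least part of the occlusion region."""
    if camera is not None and camera.confidence >= 0.5:  # illustrative threshold
        return camera.label
    return None

def decide_driving_action(label: Optional[str]) -> str:
    """Determine whether vehicle systems should perform a driving action."""
    if label in {"pedestrian", "cyclist"}:
        return "hold"      # keep the vehicle stationary until the path is clear
    return "proceed"

# Example: the camera sees a pedestrian inside the lidar's close-in occlusion region.
cam = CameraDetection(label="pedestrian", confidence=0.9, overlaps_occlusion=True)
assert detect_object(None, cam)
action = decide_driving_action(classify_object(cam))
```

The sketch mirrors the claim's structure: detection may rely on either sensor, while classification uses the imagery precisely because it covers the lidar's occlusion region near the vehicle.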