US 11,885,886 B2
Systems and methods for camera-LiDAR fused object detection with LiDAR-to-image detection matching
Basel Alghanem, Pittsburgh, PA (US); Arsenii Saranin, Pittsburgh, PA (US); G. Peter K. Carr, Allison Park, PA (US); and Kevin Lee Wyffels, Livonia, MI (US)
Assigned to Ford Global Technologies, LLC, Dearborn, MI (US)
Filed by FORD GLOBAL TECHNOLOGIES, LLC, Dearborn, MI (US)
Filed on Oct. 23, 2020, as Appl. No. 17/078,543.
Prior Publication US 2022/0128701 A1, Apr. 28, 2022
Int. Cl. G01S 17/931 (2020.01); G01S 7/497 (2006.01); G01S 17/894 (2020.01)
CPC G01S 17/931 (2020.01) [G01S 7/497 (2013.01); G01S 17/894 (2020.01)] 23 Claims
OG exemplary drawing
 
1. A method for controlling an autonomous vehicle, comprising:
obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of the autonomous vehicle;
using, by the computing device, the LiDAR dataset and at least one image to detect an object that is in proximity to the autonomous vehicle, the object being detected by
matching points of the LiDAR dataset to pixels in the at least one image, and
detecting the object in a point cloud defined by the LiDAR dataset based on the matching; and
using, by the computing device, the object detection to facilitate at least one autonomous driving operation,
wherein the matching comprises determining a probability distribution of pixels of the at least one image to which a point of the LiDAR dataset may project, taking into account a projection uncertainty in view of camera calibration uncertainties, and the probability distribution is determined by computing a probability distribution function over image space coordinates for a pixel to which a point of the LiDAR dataset would probably project, computed in accordance with the following mathematical equation

OG Complex Work Unit Math
where x_i and y_i represent image space coordinates for a pixel, and X, Y and Z represent LiDAR space coordinates for a point of the LiDAR dataset.
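The equation itself is not reproduced in this Gazette entry ("OG Complex Work Unit Math" is the placeholder), so the following is only an illustrative sketch of the kind of computation the claim describes: projecting a LiDAR-space point (X, Y, Z) to image space coordinates (x_i, y_i) through a pinhole camera model, propagating an assumed calibration uncertainty to image space by first-order (Jacobian) error propagation, and evaluating a Gaussian probability distribution function over pixel coordinates. The pinhole model, the Gaussian form, and all function names and parameters here are assumptions for illustration, not the patent's actual equation.

```python
import numpy as np

def project_point(point_lidar, R, t, fx, fy, cx, cy):
    """Project a LiDAR-space point (X, Y, Z) into image space (x_i, y_i)
    via an assumed pinhole model with rotation R, translation t, focal
    lengths fx, fy, and principal point (cx, cy)."""
    X, Y, Z = R @ point_lidar + t  # transform into the camera frame
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def pixel_distribution(point_lidar, R, t, fx, fy, cx, cy, calib_cov):
    """Mean pixel and 2x2 image-space covariance obtained by first-order
    propagation of a calibration uncertainty, modeled here (as an
    assumption) as a 3x3 covariance on the camera-frame point."""
    mean = project_point(point_lidar, R, t, fx, fy, cx, cy)
    X, Y, Z = R @ point_lidar + t
    # Jacobian of the pixel coordinates with respect to the camera-frame point
    J = np.array([[fx / Z, 0.0,    -fx * X / Z**2],
                  [0.0,    fy / Z, -fy * Y / Z**2]])
    cov = J @ calib_cov @ J.T
    return mean, cov

def gaussian_pdf(pixel, mean, cov):
    """Evaluate a 2-D Gaussian PDF over image space coordinates, giving the
    probability density that the LiDAR point projects to a given pixel."""
    d = pixel - mean
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
```

Under this sketch, matching would evaluate `gaussian_pdf` at candidate pixels and associate the LiDAR point with the highest-probability pixel(s), so that points with large projection uncertainty spread their association over a wider image region rather than committing to a single pixel.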