US 12,033,393 B2
3D object detection method using synergy of heterogeneous sensors for autonomous driving
Rodolfo Valiente Romero, Orlando, FL (US); Hyukseong Kwon, Thousand Oaks, CA (US); Rajan Bhattacharyya, Sherman Oaks, CA (US); Michael J. Daily, Thousand Oaks, CA (US); and Gavin D. Holland, Oak Park, CA (US)
Assigned to GM GLOBAL TECHNOLOGY OPERATIONS LLC, Detroit, MI (US)
Filed by GM Global Technology Operations LLC, Detroit, MI (US)
Filed on Sep. 28, 2021, as Appl. No. 17/487,835.
Prior Publication US 2023/0109712 A1, Apr. 13, 2023
Int. Cl. G06V 20/58 (2022.01); G01S 17/89 (2020.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01)
CPC G06V 20/58 (2022.01) [G01S 17/89 (2013.01); G06V 10/82 (2022.01); G06V 20/64 (2022.01)] 14 Claims
OG exemplary drawing
 
1. A method for performing object detection during autonomous driving, comprising:
performing 3D object detection in a 3D object detection segment;
uploading the output of multiple sensors in communication with the 3D object detection segment into multiple point clouds;
transferring point cloud data from the multiple point clouds to a Region Proposal Network (RPN);
independently performing 2D object detection in a 2D object detection segment in parallel with the 3D object detection in the 3D object detection segment;
taking a given input image and simultaneously learning box coordinates and class-label probabilities in a 2D object detection network that treats object detection as a regression problem;
passing image output from a camera to an instance segmentation deep neural network (DNN) having an instance segmentation device, wherein different instances of an object receive different labels; and
moving an instance product from the instance segmentation device to an instance mask detector, wherein the segmentation device output is a binary mask for the regions.
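The claimed method runs two branches in parallel: a 3D branch that feeds sensor-derived point clouds to a Region Proposal Network, and an independent 2D branch that regresses box coordinates and class probabilities from an image and produces per-instance binary masks. The following is a minimal structural sketch of that data flow only, not the patented implementation; every function here is a hypothetical stand-in with trivial placeholder logic where the claim calls for trained networks.

```python
import numpy as np

def region_proposal_3d(point_cloud):
    """Stand-in for the RPN: propose a 3D box from a point cloud.

    Here it simply returns one axis-aligned box [x1, y1, z1, x2, y2, z2]
    enclosing all points; a real RPN would emit many scored proposals.
    """
    lo, hi = point_cloud.min(axis=0), point_cloud.max(axis=0)
    return [np.concatenate([lo, hi])]

def detect_2d(image):
    """Stand-in for the regression-style 2D detector: from a given input
    image, produce box coordinates and class-label probabilities at once.
    """
    h, w = image.shape[:2]
    box = np.array([0.0, 0.0, float(w), float(h)])
    class_probs = np.array([0.9, 0.1])  # e.g. [vehicle, background]
    return box, class_probs

def instance_masks(image):
    """Stand-in for the instance-segmentation DNN: assign each instance a
    distinct label, then convert each label into a binary mask.
    """
    h, w = image.shape[:2]
    labels = np.zeros((h, w), dtype=int)
    labels[:, : w // 2] = 1   # instance 1 occupies the left half
    labels[:, w // 2 :] = 2   # instance 2 occupies the right half
    return [(labels == i).astype(np.uint8) for i in (1, 2)]

def detect_objects(point_clouds, image):
    """Run the 3D and 2D branches independently and return both results."""
    # 3D branch: sensor outputs -> point clouds -> RPN proposals.
    proposals_3d = [p for pc in point_clouds for p in region_proposal_3d(pc)]
    # 2D branch, performed in parallel with (independently of) the 3D branch:
    # regression detector plus instance-mask extraction.
    box_2d, class_probs = detect_2d(image)
    masks = instance_masks(image)
    return proposals_3d, (box_2d, class_probs), masks
```

In this sketch the two branches share no state, mirroring the claim's requirement that 2D detection proceed independently and in parallel with 3D detection; a downstream fusion stage (not recited in claim 1) would combine the proposals, boxes, and masks.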