US 12,406,482 B2
Sensor fusion-based object detection system and method for objects with a complex shape or a large size
Soomok Lee, Gyeonggi-do (KR)
Assigned to ThorDrive Co., Ltd., Seoul (KR)
Filed by ThorDrive Co., Ltd., Seoul (KR)
Filed on May 12, 2022, as Appl. No. 17/743,290.
Claims priority of application No. 10-2021-0062288 (KR), filed on May 13, 2021.
Prior Publication US 2024/0412498 A1, Dec. 12, 2024
Int. Cl. G06V 10/86 (2022.01); G01S 17/86 (2020.01); G01S 17/894 (2020.01); G06V 10/26 (2022.01); G06V 10/762 (2022.01); G06V 10/80 (2022.01); G06V 10/84 (2022.01)
CPC G06V 10/811 (2022.01) [G01S 17/86 (2020.01); G01S 17/894 (2020.01); G06V 10/26 (2022.01); G06V 10/762 (2022.01); G06V 10/84 (2022.01)] 18 Claims
OG exemplary drawing
 
1. A sensor fusion object detection system capable of solving the over-segmentation problem that may occur in recognizing an object having a complex shape or a large size, the system comprising:
a LiDAR detector that derives object segmentation information on an object-instance basis, the object segmentation information including point segments for each object obtained by clustering a point cloud from one or more LiDAR sensors;
a camera detector that derives object recognition information for each object from an image obtained from a camera sensor; and
a fusion recognition unit that derives object point groups segmented for each object by using the object recognition information and the object segmentation information,
wherein the fusion recognition unit includes an optimization unit that derives the object point groups by using a graph-based probability optimization technique based on a first probability as to whether each point segment corresponds to a particular object and a second probability as to whether two different point segments correspond to the same object, the first probability and the second probability being calculated based on the object segmentation information and the object recognition information,
wherein the object recognition information includes object region information and object label information for each recognized object, the fusion recognition unit further includes a camera information association unit that calculates a third probability that each point segment corresponds to the object label information, the third probability being based on a ratio of the points of each point segment of the object segmentation information that match the corresponding object region information, and the optimization unit calculates the first probability based on the third probability.
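The sketches below illustrate, under stated assumptions, how the claimed components might be realized in code; they are not the patented implementations. First, a minimal sketch of the point-cloud clustering recited for the LiDAR detector. The claim does not name a clustering algorithm; density-based Euclidean clustering (DBSCAN) and the function name `cluster_point_cloud` are assumptions made for illustration.

```python
# Hypothetical sketch of the LiDAR detector's clustering step; DBSCAN is an
# assumed choice, since the claim only recites clustering a point cloud.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points: np.ndarray, eps: float = 0.5, min_points: int = 10):
    """Cluster an (N, 3) LiDAR point cloud into per-object point segments.

    Returns a list of index arrays, one per point segment (noise points dropped).
    """
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [np.flatnonzero(labels == k) for k in range(labels.max() + 1)]
```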
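Next, a sketch of the camera detector, which derives object region information (here, 2D bounding boxes) and object label information from a camera image. An off-the-shelf torchvision detector is assumed; the patent does not prescribe a particular recognition model.

```python
# Hypothetical camera detector sketch using a pre-trained torchvision model.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def detect_objects(image_path: str, score_threshold: float = 0.5):
    """Return (box_xyxy, label, score) tuples: object region and label information."""
    image = read_image(image_path)                 # (3, H, W) uint8 tensor
    batch = [weights.transforms()(image)]
    with torch.no_grad():
        out = model(batch)[0]
    names = weights.meta["categories"]
    return [(box.tolist(), names[int(lbl)], float(sc))
            for box, lbl, sc in zip(out["boxes"], out["labels"], out["scores"])
            if sc >= score_threshold]
```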
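A sketch of the camera information association step follows: the third probability is approximated as the fraction of a segment's points that project inside the corresponding object region, taken here to be an axis-aligned bounding box. The pinhole projection model, the extrinsic matrix `T_cam_lidar`, and the function name are assumptions for illustration; the claim only recites the matching ratio.

```python
# Hypothetical computation of the "third probability" for one point segment
# against one detected object's region information.
import numpy as np

def third_probability(segment_points: np.ndarray,  # (N, 3) LiDAR points of one point segment
                      K: np.ndarray,                # (3, 3) camera intrinsic matrix
                      T_cam_lidar: np.ndarray,      # (4, 4) LiDAR-to-camera extrinsic transform
                      box_xyxy: tuple) -> float:
    """Ratio of the segment's points that project inside the object region
    (x_min, y_min, x_max, y_max); used as the probability that the segment
    corresponds to that object's label information."""
    n_total = len(segment_points)
    if n_total == 0:
        return 0.0
    # Transform LiDAR points into the camera frame.
    pts_h = np.hstack([segment_points, np.ones((n_total, 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]
    cam = cam[:, cam[2] > 0]                 # keep only points in front of the camera
    if cam.shape[1] == 0:
        return 0.0
    # Pinhole projection to pixel coordinates.
    proj = K @ cam
    uv = proj[:2] / proj[2]
    x_min, y_min, x_max, y_max = box_xyxy
    inside = ((uv[0] >= x_min) & (uv[0] <= x_max) &
              (uv[1] >= y_min) & (uv[1] <= y_max))
    return float(inside.sum()) / n_total     # matching ratio over all segment points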
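Finally, a sketch of how the optimization unit's graph-based fusion might operate: segments are nodes, the second (pairwise, same-object) probability defines edges, and segment pairs whose probability clears a threshold are merged, which counteracts over-segmentation of large or complex-shaped objects. This thresholded union-find merge is a simplified stand-in for the claimed graph-based probability optimization, not the patented formulation; the first probability here would be derived from the third probability, mirroring the final wherein clause.

```python
# Simplified stand-in for the graph-based probability optimization:
# merge point segments by pairwise same-object probability, then label
# each merged group using the per-segment (first) probabilities.
from collections import defaultdict

def fuse_segments(first_prob,       # dict: segment_id -> {label: P(segment belongs to label)}
                  second_prob,      # dict: (seg_i, seg_j) -> P(both segments are one object)
                  merge_threshold: float = 0.5):
    """Return {group_id: (member_segment_ids, label)} object point groups."""
    parent = {s: s for s in first_prob}

    def find(s):                     # union-find with path compression
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    # Merge any pair of segments whose same-object probability clears the
    # threshold; this repairs over-segmented large or complex-shaped objects.
    for (i, j), p in second_prob.items():
        if p >= merge_threshold:
            parent[find(i)] = find(j)

    groups = defaultdict(list)
    for s in first_prob:
        groups[find(s)].append(s)

    # Label each merged group with the class having the highest summed
    # segment-to-object (first) probability among its members.
    labeled = {}
    for root, members in groups.items():
        scores = defaultdict(float)
        for s in members:
            for label, p in first_prob[s].items():
                scores[label] += p
        labeled[root] = (members, max(scores, key=scores.get) if scores else None)
    return labeled
```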