US 12,265,902 B2
Method and system for automatically labeling radar data
Simon Tobias Isele, Stuttgart (DE); and Marcel Peter Schilling, Karlsruhe (DE)
Assigned to Dr. Ing. h.c. F. Porsche AG, Stuttgart (DE)
Filed by Dr. Ing. h.c. F. Porsche Aktiengesellschaft, Stuttgart (DE)
Filed on Jul. 28, 2021, as Appl. No. 17/386,598.
Claims priority of application No. 10 2020 123 920.3 (DE), filed on Sep. 15, 2020.
Prior Publication US 2022/0083841 A1, Mar. 17, 2022
Int. Cl. G06T 7/11 (2017.01); G01S 17/86 (2020.01); G06N 3/045 (2023.01); G06N 3/047 (2023.01); G06V 20/70 (2022.01)
CPC G06N 3/045 (2023.01) [G01S 17/86 (2020.01); G06N 3/047 (2023.01); G06V 20/70 (2022.01)] 8 Claims
OG exemplary drawing
 
5. A system for automatically labeling sensor data of a scene, wherein a vehicle comprises a radar detector and, as optical sensors, at least one camera and a lidar, the radar detector, the camera and the lidar each having at least a portion of the surroundings of the vehicle as a respective field of view and the respective fields of view at least partly overlapping in a coverage region, wherein, in a succession of time steps, a set of three-dimensional radar points is provided by the radar detector, a set of two-dimensional image data is provided by the at least one camera, and a set of three-dimensional lidar data is provided by the lidar at each time step t, and wherein a respective plausibility label is automatically assignable to a respective radar point at each time step, the system being configured
to correct the image data to a straight-ahead view of the scene by image rectification and a subsequent perspective transformation,
to calibrate a camera-based depth estimation generated by a neural network by means of the lidar data in the coverage region of the fields of view of the camera and the lidar,
to calculate a three-dimensional point cloud representation from two-dimensional image information by means of the camera-based depth estimation,
to associate the three-dimensional point cloud representation with the radar points and to associate the lidar data with the radar points by applying a k-nearest-neighbor algorithm, as a result of which, depending on the coverage region of the fields of view, a radar/lidar plausibility and a radar/camera plausibility arise that take account of Euclidean distances and uncertainties,
to merge the radar/lidar plausibility and the radar/camera plausibility to form a combined optics-based plausibility,
in parallel therewith, to assign a radar/tracking plausibility to each radar point by means of tracking, with odometry data of the vehicle being taken into account,
to combine the optics-based plausibility and the radar/tracking plausibility and subsequently assign a binary plausibility label, the latter characterizing whether the respective radar detection describes an artifact or a plausible reflection.
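The configured functions of claim 5 can be illustrated step by step. First, a minimal sketch of the image rectification followed by a perspective transformation to a straight-ahead view, using OpenCV; the intrinsic matrix K, the distortion coefficients, and the homography H are hypothetical placeholders, not values disclosed in the patent.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics and distortion coefficients (placeholders).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def rectify_and_warp(image: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Undistort the raw camera image, then warp it to a straight-ahead view.

    H is a 3x3 homography mapping the rectified image to the straight-ahead
    perspective, assumed known from extrinsic calibration.
    """
    undistorted = cv2.undistort(image, K, dist)
    h, w = undistorted.shape[:2]
    return cv2.warpPerspective(undistorted, H, (w, h))
```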
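The claim leaves open how the neural-network depth estimation is calibrated against the lidar. One common approach, sketched here as an assumption, fits a global scale factor by comparing the predicted depths with lidar ranges projected into the image; all function and variable names are illustrative.

```python
import numpy as np

def calibrate_depth(pred_depth: np.ndarray,
                    lidar_uv: np.ndarray,
                    lidar_range: np.ndarray) -> np.ndarray:
    """Scale a monocular depth map so it agrees with the lidar in the
    coverage region of the two fields of view.

    pred_depth  : HxW network depth estimate (up to an unknown scale).
    lidar_uv    : Nx2 integer pixel coordinates of lidar points projected
                  into the rectified image (assumed precomputed).
    lidar_range : N metric depths of those lidar points.
    """
    u, v = lidar_uv[:, 0], lidar_uv[:, 1]
    sampled = pred_depth[v, u]            # predicted depth at the lidar pixels
    valid = sampled > 1e-6
    # The median ratio is robust to outliers from occlusion or mis-projection.
    scale = np.median(lidar_range[valid] / sampled[valid])
    return pred_depth * scale
```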
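Calculating the three-dimensional point cloud representation from the two-dimensional image information is standard pinhole back-projection, p = d · K⁻¹ [u, v, 1]ᵀ; a sketch:

```python
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Turn a calibrated HxW depth map into an (H*W)x3 point cloud in the
    camera frame by scaling each normalized pixel ray by its depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pixels @ np.linalg.inv(K).T    # K^-1 [u, v, 1]^T for every pixel
    return rays * depth.reshape(-1, 1)    # scale each ray by its depth
```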
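For the k-nearest-neighbor association and the merging of the two optical plausibilities, a sketch using a KD-tree; the Gaussian distance-to-plausibility mapping, the sigma values, and the noisy-OR fusion are assumptions, since the claim only requires that Euclidean distances and uncertainties be taken into account.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_plausibility(radar_xyz, sensor_xyz, sigma, k=3):
    """Per-radar-point plausibility from the Euclidean distance to the k
    nearest points of another sensor; sigma models spatial uncertainty."""
    dists, _ = cKDTree(sensor_xyz).query(radar_xyz, k=k)  # shape (N, k)
    mean_d = dists.mean(axis=1)
    return np.exp(-0.5 * (mean_d / sigma) ** 2)  # 1 near support, -> 0 far off

# Dummy point clouds standing in for one time step's sensor data.
rng = np.random.default_rng(0)
radar_xyz = rng.uniform(-20, 20, (64, 3))
lidar_xyz = rng.uniform(-20, 20, (5000, 3))
camera_xyz = rng.uniform(-20, 20, (8000, 3))   # e.g. from backproject()

p_lidar = knn_plausibility(radar_xyz, lidar_xyz, sigma=0.2)
p_camera = knn_plausibility(radar_xyz, camera_xyz, sigma=0.5)
# Noisy-OR merge into the combined optics-based plausibility.
p_optical = 1.0 - (1.0 - p_lidar) * (1.0 - p_camera)
```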
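The radar/tracking plausibility is sketched here only as a temporal-consistency check: the previous radar scan is ego-motion-compensated with the odometry pose increment, and a detection that reappears near its compensated position is considered plausible. This is a simplification of the tracking the claim refers to; the pose increment (R, t) and sigma are assumed inputs.

```python
import numpy as np
from scipy.spatial import cKDTree

def tracking_plausibility(points_t, points_prev, R, t, sigma=0.5):
    """Score each current radar point by its distance to the nearest point
    of the previous scan after compensating the ego-motion (R, t) taken
    from vehicle odometry; artifacts rarely have consistent predecessors."""
    compensated = points_prev @ R.T + t   # previous scan in the current frame
    dists, _ = cKDTree(compensated).query(points_t, k=1)
    return np.exp(-0.5 * (dists / sigma) ** 2)

# Example call with a hypothetical odometry increment (convention-dependent):
# p_track = tracking_plausibility(radar_xyz, radar_prev,
#                                 R=np.eye(3), t=np.array([0.0, 0.0, -0.5]))
```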
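Finally, combining the optics-based and tracking plausibilities and assigning the binary label; the averaging and the 0.5 threshold are assumptions, as the claim does not specify the combination rule.

```python
import numpy as np

def binary_label(p_optical, p_tracking, threshold=0.5):
    """Combine the two plausibilities and threshold them into the binary
    label of the claim: True = plausible reflection, False = artifact."""
    combined = 0.5 * (p_optical + p_tracking)
    return combined >= threshold
```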