US 12,456,246 B2
Method for labelling an epipolar-projected 3D image
Lucien Garcia, Toulouse (FR); Thomas Meneyrol, Toulouse (FR); and Spencer Danne, Toulouse (FR)
Assigned to CONTINENTAL AUTONOMOUS MOBILITY GERMANY GMBH, Ingolstadt (DE)
Appl. No. 18/575,423
Filed by Continental Autonomous Mobility Germany GmbH, Ingolstadt (DE)
PCT Filed Jul. 4, 2022, PCT No. PCT/EP2022/068379
§ 371(c)(1), (2) Date Dec. 29, 2023,
PCT Pub. No. WO2023/280745, PCT Pub. Date Jan. 12, 2023.
Claims priority of application No. 2107380 (FR), filed on Jul. 8, 2021.
Prior Publication US 2024/0312105 A1, Sep. 19, 2024
Int. Cl. G06T 15/00 (2011.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06V 10/25 (2022.01)
CPC G06T 15/00 (2013.01) [G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06V 10/25 (2022.01)] 9 Claims
 
1. A method for labeling a 3D image of a scene acquired by a 3D sensor comprising identifying at least one region of interest in the 3D image, the method being implemented by a computer and comprising:
receiving:
a 2D image of the same scene, acquired by a camera,
coordinates, in the 2D image, of a set of pixels delineating the region of interest,
coordinates, in the 2D image, of a reference point belonging to the region of interest, and
data relating to the relative position and relative orientation of the camera with respect to the 3D sensor,
determining the depth of the reference point in a coordinate system associated with the camera, said step comprising:
based on the coordinates of the reference point in the 2D image, determining the two-dimensional coordinates of a plurality of first points in the 3D image, each first point corresponding to a possible position of the reference point in the 3D image,
obtaining, for each first point, a third depth coordinate with respect to the 3D sensor,
for each first point of the 3D image, obtaining the coordinates of the corresponding point in the 2D image, based on the depth coordinate of the first point,
selecting, in the 2D image, the first point closest to the reference point, and,
assigning, to the reference point, a depth corresponding to the depth of the selected first point,
assigning, to the pixels delineating the region of interest in the 2D image, a depth corresponding to the depth assigned to the reference point, and
computing the coordinates, in the 3D image, of the pixels delineating the region of interest, based on the coordinates of the pixels delineating the region of interest in the 2D image, on the depth assigned to the pixels delineating the region of interest and on the data relating to the relative position and relative orientation of the camera with respect to the 3D sensor.
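The claimed sequence of steps can be sketched in code. The following is a minimal illustration only, not the patented implementation: the function names, the pinhole-projection model, the candidate-depth sampling, and the convention that R and t map camera-frame points into the 3D-sensor frame are all assumptions introduced here for clarity.

```python
import numpy as np

def back_project(px, depth, K):
    """Pixel (u, v) plus depth -> 3D point in the frame of intrinsics K."""
    u, v = px
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return np.array([x, y, depth])

def project(P, K):
    """3D point -> pixel (u, v) under the pinhole model of intrinsics K."""
    return np.array([K[0, 0] * P[0] / P[2] + K[0, 2],
                     K[1, 1] * P[1] / P[2] + K[1, 2]])

def label_roi(depth_map, K_cam, K_sensor, R, t, ref_px, roi_px, candidate_depths):
    """Illustrative epipolar depth search and ROI transfer (not the claimed
    implementation). R, t are assumed to map camera-frame coordinates into
    the 3D-sensor frame; depth_map holds the 3D sensor's depth coordinates."""
    best = None
    for d in candidate_depths:
        # "first point": a possible position of the reference point in the 3D image
        P_cam = back_project(ref_px, d, K_cam)
        fp = project(R @ P_cam + t, K_sensor)
        iu, iv = int(round(fp[0])), int(round(fp[1]))
        if not (0 <= iv < depth_map.shape[0] and 0 <= iu < depth_map.shape[1]):
            continue
        z = depth_map[iv, iu]                      # third (depth) coordinate of the first point
        # re-project the first point, at its measured depth, back into the 2D image
        P_cam_meas = R.T @ (back_project((iu, iv), z, K_sensor) - t)
        err = np.linalg.norm(project(P_cam_meas, K_cam) - np.asarray(ref_px, float))
        if best is None or err < best[0]:
            best = (err, P_cam_meas[2])            # depth in the camera coordinate system
    ref_depth = best[1]                            # depth assigned to the reference point
    # assign that depth to the ROI-delineating pixels and compute their 3D coordinates
    roi_3d = [R @ back_project(px, ref_depth, K_cam) + t for px in roi_px]
    return ref_depth, np.array(roi_3d)
```

With the camera and 3D sensor coincident (identity rotation, zero translation) and a fronto-parallel scene at constant depth, the search recovers that depth for the reference point and lifts the ROI boundary pixels onto the same plane.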