US 12,354,370 B2
Object detection device, object detection system, mobile object, and object detection method
Kazuyuki Ota, Ueda (JP); Kazumasa Akimoto, Sagamihara (JP); Kenji Kono, Yokohama (JP); Junya Kishimoto, Yokohama (JP); and Fuko Takano, Yokohama (JP)
Assigned to KYOCERA Corporation, Kyoto (JP)
Appl. No. 17/753,512
Filed by KYOCERA Corporation, Kyoto (JP)
PCT Filed Sep. 2, 2020, PCT No. PCT/JP2020/033241
§ 371(c)(1), (2) Date Mar. 4, 2022,
PCT Pub. No. WO2021/045092, PCT Pub. Date Mar. 11, 2021.
Claims priority of application No. 2019-162349 (JP), filed on Sep. 5, 2019.
Prior Publication US 2022/0415056 A1, Dec. 29, 2022
Int. Cl. G06V 20/58 (2022.01); G06T 7/593 (2017.01); G06T 7/60 (2017.01); G06T 7/70 (2017.01); G06V 10/12 (2022.01); H04N 13/239 (2018.01); H04N 13/271 (2018.01)
CPC G06V 20/58 (2022.01) [G06T 7/593 (2017.01); G06T 7/60 (2013.01); G06T 7/70 (2017.01); G06V 10/12 (2022.01); H04N 13/239 (2018.05); H04N 13/271 (2018.05); G06T 2207/10012 (2013.01); G06T 2207/30261 (2013.01)] 11 Claims
OG exemplary drawing
 
1. An object detection device comprising:
a processor configured to execute
a first process for estimating a shape of a road surface in a real space based on a first disparity map, the first disparity map being generated based on an output of a stereo camera that captures an image including the road surface and being a map in which a disparity obtained from the output of the stereo camera is associated with two-dimensional coordinates formed by a first direction corresponding to a horizontal direction of the image captured by the stereo camera and a second direction intersecting the first direction,
a second process for removing, from the first disparity map, the disparity for which a height from the road surface in the real space corresponds to a predetermined range, based on the estimated shape of the road surface, to generate a second disparity map,
a third process for generating, for each of the coordinate ranges in the first direction of the second disparity map, a histogram representing the numbers of occurrences of respective disparities based on the estimated shape of the road surface and the second disparity map, and determining disparities as object disparities when the number of occurrences of each of the disparities exceeds a predetermined threshold corresponding to the disparity, and
a fourth process for converting information on the object disparities for the respective coordinate ranges in the first direction into points in the real space and extracting a group of points based on a distribution of the points to detect the object,
wherein the processor is configured to, in the first process, estimate the shape of the road surface by sequentially extracting candidate road surface disparities within a predetermined margin range, from a short-distance side to a long-distance side as viewed from the stereo camera, and estimating the road surface disparities based on the candidate road surface disparities, and
wherein the processor is configured to, in the third process, calculate height information of the determined object disparities based on a distribution, in the second direction on the second disparity map, of disparity pixels having the same object disparities, and on the estimated shape of the road surface.
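
The sketches that follow illustrate one plausible reading of the four claimed processes; they are not the patented implementation, and every function name, parameter, and numeric value in them is an assumption introduced for illustration. For the first process, a road-surface disparity profile can be estimated by scanning the disparity map row by row from the near side (large disparity) to the far side, accepting a row's candidate only when it stays within a margin of the running estimate, and then fitting a simple flat-road model; the margin value and the median/line-fit choices below are assumed.

    import numpy as np

    def estimate_road_surface(disparity_map, margin=2.0, invalid=0):
        """Sketch: estimate a road-surface disparity per image row by scanning
        from the short-distance (bottom, large-disparity) side upward and
        accepting only candidates within +/- margin of the running estimate."""
        h, _ = disparity_map.shape
        road = np.full(h, np.nan)
        prev = None
        for v in range(h - 1, -1, -1):               # bottom row = nearest road
            row = disparity_map[v]
            valid = row[row != invalid]
            if valid.size == 0:
                continue
            candidate = np.median(valid)             # robust row representative
            if prev is None or abs(candidate - prev) <= margin:
                road[v] = candidate                  # accept as a road disparity
                prev = candidate
        # fill gaps by fitting a line d = a*v + b to accepted rows (flat-road model)
        vs = np.where(~np.isnan(road))[0]
        if vs.size >= 2:
            a, b = np.polyfit(vs, road[vs], 1)
            road = a * np.arange(h) + b
        return road                                   # road disparity per row v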
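For the second process, one way to remove disparities whose height from the road surface falls in a predetermined range is to convert each pixel's disparity to a height above the estimated road plane and drop the pixels that land inside that range (for example, the road surface itself and anything just below it). The pinhole-stereo geometry, the [low, high] band, and the camera parameters below are placeholder assumptions.

    import numpy as np

    def remove_road_range(disparity_map, road_disparity, fx, fy, baseline,
                          low=-0.5, high=0.15):
        """Sketch: build the second disparity map by removing pixels whose
        height above the estimated road surface falls inside [low, high] metres.
        Assumes a gap-free road profile (e.g. from the previous sketch)."""
        h, w = disparity_map.shape
        # linear road model d = a*v + b recovered from the per-row profile
        a, b = np.polyfit(np.arange(h), road_disparity, 1)
        second = np.zeros_like(disparity_map)
        for v in range(h):
            d = disparity_map[v].astype(float)
            valid = d > 0
            z = np.full(w, np.inf)
            z[valid] = fx * baseline / d[valid]       # depth of each valid pixel
            v_road = (d - b) / a                      # row where the road shows disparity d
            height = (v_road - v) * z / fy            # metres above the road plane
            drop = valid & (height >= low) & (height <= high)
            keep = valid & ~drop
            second[v, keep] = disparity_map[v, keep]
        return second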
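For the third process, the histogram over coordinate ranges in the first direction can be read as a column-strip vote: within each strip of columns, count how often each disparity occurs and keep the disparities whose count exceeds a disparity-dependent threshold (a larger disparity means a closer, larger image footprint, so the threshold can grow with disparity). The strip width, disparity range, and threshold rule below are assumptions.

    import numpy as np

    def find_object_disparities(second_map, strip_width=8, max_disp=128,
                                base_threshold=20):
        """Sketch: per column strip, histogram the disparities and return the
        disparity values whose occurrence count exceeds a disparity-dependent
        threshold (larger disparity -> closer object -> larger threshold)."""
        h, w = second_map.shape
        results = []                                  # (strip index, object disparities)
        thresholds = base_threshold * (1.0 + np.arange(max_disp) / max_disp)
        for s in range(0, w, strip_width):
            strip = second_map[:, s:s + strip_width]
            d = strip[strip > 0].astype(int)
            hist = np.bincount(d, minlength=max_disp)[:max_disp]
            object_disps = np.where(hist > thresholds)[0]
            results.append((s // strip_width, object_disps))
        return results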
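For the fourth process, each (strip, object disparity) pair can be projected to an (x, z) point on the road plane and nearby points grouped into objects. The greedy distance-based grouping below merely stands in for whatever extraction of point groups the patent actually uses, and the camera parameters and merge distance are placeholders.

    import numpy as np

    def detect_objects(strip_disparities, strip_width, fx, cx, baseline,
                       merge_dist=0.5):
        """Sketch: convert (strip, disparity) pairs to (x, z) points in the real
        space and group points closer than merge_dist metres into objects."""
        points = []
        for strip_idx, disps in strip_disparities:
            u = strip_idx * strip_width + strip_width / 2.0   # strip centre column
            for d in disps:
                if d <= 0:
                    continue
                z = fx * baseline / d                         # forward distance
                x = (u - cx) * z / fx                         # lateral position
                points.append((x, z))
        # greedy grouping: a point joins a group if it is near any of its members
        groups = []
        for p in sorted(points):
            for g in groups:
                if any(np.hypot(p[0] - q[0], p[1] - q[1]) < merge_dist for q in g):
                    g.append(p)
                    break
            else:
                groups.append([p])
        return groups                                         # one list of points per detected object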
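For the height information in the final wherein clause, one reading is that, within a strip, the vertical (second-direction) spread of the pixels carrying a given object disparity is compared against the road profile to yield a height above the road. The interface and tolerance below are illustrative assumptions only.

    import numpy as np

    def object_height(second_map, road_disparity, strip_slice, object_disp,
                      fx, fy, baseline, tol=1):
        """Sketch: height of an object disparity within one column strip, from
        the vertical distribution of its pixels and the road disparity profile."""
        strip = second_map[:, strip_slice]
        rows = np.where(np.abs(strip - object_disp) <= tol)[0]   # rows holding that disparity
        if rows.size == 0:
            return 0.0
        v_top = rows.min()                                        # topmost pixel of the object
        # road row whose estimated disparity best matches the object disparity
        v_road = int(np.nanargmin(np.abs(road_disparity - object_disp)))
        z = fx * baseline / object_disp                           # object depth
        return max(0.0, (v_road - v_top) * z / fy)                # metres above the road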