US 11,789,455 B2
Control of autonomous vehicle based on fusion of pose information and visual data
Wendong Ding, Beijing (CN); Xiaofei Rui, Beijing (CN); Gang Wang, Beijing (CN); and Shiyu Song, Sunnyvale, CA (US)
Assigned to Beijing Baidu Netcom Science Technology Co., Ltd., Beijing (CN); and Apollo Intelligent Driving Technology (Beijing) Co., Ltd., Beijing (CN)
Filed by Apollo Intelligent Driving Technology (Beijing) Co., Ltd., Beijing (CN)
Filed on Dec. 29, 2020, as Appl. No. 17/137,048.
Claims priority of application No. 202010497244.0 (CN), filed on Jun. 2, 2020.
Prior Publication US 2021/0370970 A1, Dec. 2, 2021
Int. Cl. G05D 1/02 (2020.01); B60W 60/00 (2020.01); G06V 20/56 (2022.01); G06F 18/25 (2023.01)
CPC G05D 1/0246 (2013.01) [B60W 60/001 (2020.02); G06F 18/25 (2023.01); G06V 20/56 (2022.01); B60W 2420/403 (2013.01); B60W 2422/70 (2013.01); B60W 2520/28 (2013.01)] 10 Claims
OG exemplary drawing
 
1. A positioning method, wherein the method is applied to an autonomous driving vehicle, and the method comprises:
collecting first pose information measured by an inertial measurement unit within a preset time period, and collecting second pose information measured by a wheel tachometer within the time period, wherein the time period is a sampling time interval during which a camera collects adjacent frame images;
generating positioning information according to the first pose information, the second pose information and the adjacent frame images; and
controlling driving of the autonomous driving vehicle according to the positioning information;
wherein the generating the positioning information according to the first pose information, the second pose information and the adjacent frame images comprises:
generating fused pose information by fusing the first pose information and the second pose information; and
generating the positioning information according to the adjacent frame images and the fused pose information;
wherein the generating the positioning information according to the adjacent frame images and the fused pose information comprises:
determining, according to the adjacent frame images, fused pose information that is in line with a preset error; and
extracting rotation information and displacement information from the fused pose information that is in line with the preset error, and determining the rotation information and the displacement information as the positioning information;
wherein the adjacent frame images comprise image coordinate information of a preset feature point; and the determining, according to the adjacent frame images, fused pose information that is in line with a preset error comprises:
inputting the image coordinate information and the fused pose information into a preset error model; and
obtaining a result outputted from the error model as the fused pose information that is in line with the preset error;
wherein the error model comprises an internal parameter of the camera and an external parameter of the camera, and the external parameter of the camera comprises a rotation parameter and a displacement parameter of the camera relative to the inertial measurement unit.
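
For illustration only, the following Python sketch shows one way the fusion step recited above (generating fused pose information from the inertial measurement unit and wheel-tachometer measurements) could be realized. The claim recites fusing the two pose sources but not a particular estimator; the function name fuse_pose, the weighted blend, and the weight w_imu are hypothetical choices, not the patented method, and a practical system might instead use an extended Kalman filter.

import numpy as np

def fuse_pose(imu_rotvec, imu_trans, wheel_trans, w_imu=0.5):
    """Fuse IMU and wheel-tachometer increments over one inter-frame interval.

    A minimal weighted-fusion sketch. imu_rotvec is the rotation increment
    from gyro integration (axis-angle, radians); imu_trans and wheel_trans
    are translation increments (meters) from the IMU and wheel tachometer.
    """
    # Rotation comes from the IMU, which observes angular motion directly;
    # a wheel tachometer measures wheel speed, not rotation of the body.
    fused_rot = np.asarray(imu_rotvec, dtype=float)
    # Translation blends the two sources; wheel odometry typically drifts
    # less than double-integrated accelerometer data over short intervals.
    fused_trans = (w_imu * np.asarray(imu_trans, dtype=float)
                   + (1.0 - w_imu) * np.asarray(wheel_trans, dtype=float))
    return fused_rot, fused_trans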
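
A second sketch illustrates the kind of error model the final wherein clause describes: it carries the camera's internal parameter (an intrinsic matrix) and external parameters (rotation and displacement of the camera relative to the inertial measurement unit), takes the image coordinate information of a feature point together with the fused pose, and tests whether the result is in line with a preset error. The reprojection-error formulation, the frame conventions, and all names (rotvec_to_matrix, reprojection_error, R_ci, t_ci) are assumptions for illustration; the patent text does not publish an implementation.

import numpy as np

def rotvec_to_matrix(rv):
    """Rodrigues formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.eye(3)
    k = rv / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reprojection_error(uv_obs, p_world, fused_rot, fused_trans, K_cam, R_ci, t_ci):
    """Pixel residual of one feature point under the fused pose.

    K_cam      -- camera intrinsic matrix (the internal parameter), 3x3
    R_ci, t_ci -- camera extrinsics, here taken as the transform from the
                  IMU frame to the camera frame (a convention assumed for
                  this sketch; the claim only says "relative to" the IMU)
    """
    R_wi = rotvec_to_matrix(np.asarray(fused_rot, dtype=float))
    # World point -> IMU frame -> camera frame -> pixel coordinates.
    p_imu = R_wi.T @ (np.asarray(p_world, dtype=float)
                      - np.asarray(fused_trans, dtype=float))
    p_cam = R_ci @ p_imu + t_ci
    uv_h = K_cam @ p_cam
    uv = uv_h[:2] / uv_h[2]   # perspective division (assumes depth > 0)
    return np.linalg.norm(uv - np.asarray(uv_obs, dtype=float))

def in_line_with_preset_error(residuals, preset_error=2.0):
    """One plausible reading of the acceptance test: the fused pose is kept
    when the mean feature residual falls below the preset pixel error."""
    return float(np.mean(residuals)) < preset_error

In an optimization loop, the fused pose would be adjusted until in_line_with_preset_error returns True, and the rotation and displacement components of the accepted pose would then be extracted as the positioning information, matching the extraction step the claim recites.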