US 12,105,522 B2
Multi-sensor-fusion-based autonomous mobile robot indoor and outdoor navigation method and robot
Xuefeng Zhou, Guangzhou (CN); Zerong Su, Guangzhou (CN); Zhihao Xu, Guangzhou (CN); and Guanrong Tang, Guangzhou (CN)
Assigned to Institute of Intelligent Manufacturing, GDAS, Guangzhou (CN)
Filed by Institute of Intelligent Manufacturing, GDAS, Guangdong (CN)
Filed on Jan. 6, 2022, as Appl. No. 17/569,949.
Claims priority of application No. 202111168188.7 (CN), filed on Oct. 8, 2021.
Prior Publication US 2023/0116869 A1, Apr. 13, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G05D 1/00 (2024.01)
CPC G05D 1/0278 (2013.01) [G05D 1/0212 (2013.01); G05D 1/0255 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A multi-sensor-fusion-based autonomous mobile robot indoor and outdoor navigation method, comprising:
acquiring GPS information, inertial measurement data and three-dimensional point cloud data of a robot at a current position;
determining a pose change of the robot based on the inertial measurement data of the robot at the current position;
performing distortion correction on the three-dimensional point cloud data of the robot at the current position based on the pose change of the robot;
acquiring, based on a correspondence relationship between GPS information and two-dimensional maps, a two-dimensional map corresponding to the GPS information of the robot at the current position, wherein connection lines between three-dimensional coordinates corresponding to any two points on the two-dimensional map form substantially the same angle with a horizontal plane, and a pitch angle corresponding to the pose change during a movement of the robot on a same two-dimensional map is less than a set angle;
projecting the three-dimensional point cloud data after the distortion correction onto a road surface where the robot is currently moving, to form two-dimensional point cloud data of the robot at the current position by using, in each direction away from the robot, the points closest to the robot;
matching the two-dimensional point cloud data of the robot at the current position with the two-dimensional map corresponding to the GPS information of the robot at the current position, and selecting the position with the highest matching degree as the current position of the robot; and
controlling the robot according to the selected position with the highest matching degree.
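The sketches below are editorial illustrations of how the recited steps might be realized; they are not drawn from the patent's specification, and every function, class, and parameter name in them is hypothetical. This first sketch covers the second and third steps of claim 1: integrating inertial measurements into a pose change over one lidar sweep, then using that pose change to de-skew (distortion-correct) the sweep's points, assuming per-point capture timestamps normalized to the sweep duration.

```python
# Illustrative sketch only (not the patented implementation): integrate IMU
# samples over one lidar sweep into a pose change, then de-skew each point
# by interpolating that pose change to the point's capture time.
import numpy as np

def integrate_gyro(gyro_samples, dt):
    """Accumulate body-frame angular rates into a rotation vector.

    gyro_samples: (N, 3) rad/s measurements over the sweep; dt: sample period.
    Returns the approximate total rotation vector (axis * angle) for the sweep.
    """
    return np.sum(gyro_samples, axis=0) * dt

def rotvec_to_matrix(rv):
    """Rodrigues' formula: rotation vector -> 3x3 rotation matrix."""
    angle = np.linalg.norm(rv)
    if angle < 1e-12:
        return np.eye(3)
    k = rv / angle
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def deskew(points, timestamps, rot_vec, translation):
    """Motion-compensate one sweep.

    points: (N, 3) points in the sensor frame; timestamps: (N,) in [0, 1],
    each point's capture time normalized over the sweep; rot_vec/translation:
    the pose change over the whole sweep from the IMU integration above.
    Each point is re-expressed in the frame at the start of the sweep by
    linearly interpolating the pose change to its capture time.
    """
    corrected = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, timestamps)):
        R = rotvec_to_matrix(rot_vec * s)       # fractional rotation
        corrected[i] = R @ p + translation * s  # fractional translation
    return corrected
```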
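The fourth step associates GPS information with stored two-dimensional maps, each map covering terrain of roughly constant slope (the claim's substantially-same-angle condition). A minimal sketch of that lookup, assuming each map is registered with a latitude/longitude bounding box (a hypothetical registry format the patent does not specify):

```python
# Illustrative sketch only: select the stored 2-D map whose geofenced
# region contains the current GPS fix. The registry layout is hypothetical;
# the claim only requires a correspondence relationship between GPS
# information and two-dimensional maps.
from dataclasses import dataclass

@dataclass
class MapRegion:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float
    grid: object = None  # the 2-D occupancy grid stored for this region

def select_map(regions, lat, lon):
    """Return the first region whose bounding box contains (lat, lon)."""
    for r in regions:
        if r.lat_min <= lat <= r.lat_max and r.lon_min <= lon <= r.lon_max:
            return r
    raise LookupError("no stored 2-D map covers this GPS fix")
```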
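The fifth step flattens the corrected cloud onto the road plane and keeps, in each direction away from the robot, only the return nearest the robot, yielding a laser-scan-like 2-D contour. A sketch assuming a fixed angular binning and a hypothetical height band to discard ground and overhead returns:

```python
# Illustrative sketch only: project the de-skewed 3-D cloud onto the road
# plane and keep, per angular bin around the robot, only the nearest point,
# as the claim describes. Bin count and height band are assumed parameters.
import numpy as np

def project_to_2d(points, n_bins=360, z_min=0.05, z_max=1.5):
    """points: (N, 3) de-skewed cloud in the robot frame.

    Returns an (n_bins,) array of nearest planar ranges (inf where no
    return fell in a bin) and the matching bin angles in radians.
    """
    # Keep points in a height band above the road surface.
    band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    angles = np.arctan2(band[:, 1], band[:, 0])  # direction from robot
    ranges = np.hypot(band[:, 0], band[:, 1])    # planar distance
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    nearest = np.full(n_bins, np.inf)
    np.minimum.at(nearest, bins, ranges)         # min range per bin
    bin_angles = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
    return nearest, bin_angles
```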
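The final steps match that 2-D contour against the selected map and adopt the pose with the highest matching degree. The sketch below scores candidate poses by how many projected scan points land on occupied map cells over a brute-force candidate set; a production system would search more finely (e.g., correlative scan matching or scan-to-map ICP), neither of which the claim prescribes:

```python
# Illustrative sketch only: score candidate poses by the fraction of scan
# points that hit occupied map cells, and keep the best-scoring pose.
import numpy as np

def score_pose(grid, resolution, scan_xy, x, y, theta):
    """Fraction of scan points hitting occupied cells at pose (x, y, theta).

    grid: 2-D boolean occupancy array (row = y cell, column = x cell);
    resolution: meters per cell; scan_xy: (N, 2) points in the robot frame.
    """
    c, s = np.cos(theta), np.sin(theta)
    # Row-vector form of world = R(theta) @ p + t for each scan point.
    world = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    cells = (world / resolution).astype(int)
    inside = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) &
              (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0]))
    hits = grid[cells[inside, 1], cells[inside, 0]]
    return hits.mean() if hits.size else 0.0

def best_pose(grid, resolution, scan_xy, candidates):
    """candidates: iterable of (x, y, theta); returns the highest scorer."""
    return max(candidates,
               key=lambda p: score_pose(grid, resolution, scan_xy, *p))
```

Here scan_xy would come from the previous sketch by converting each finite-range bin back to Cartesian coordinates (range times cosine and sine of the bin angle), and the pose returned by best_pose is what the last step of claim 1 hands to the robot's controller.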