CPC G06T 17/05 (2013.01) [B64C 39/024 (2013.01); B64U 20/87 (2023.01); G01C 21/165 (2013.01); G01S 17/86 (2020.01); G01S 17/89 (2013.01); G05D 1/106 (2019.05); G06T 7/74 (2017.01); G06T 7/75 (2017.01); G06T 19/20 (2013.01); B64U 2101/30 (2023.01); G06T 2207/10028 (2013.01); G06T 2207/10032 (2013.01); G06T 2207/30181 (2013.01); G06T 2219/2016 (2013.01)] | 16 Claims |
1. A method for modelling a poor-texture tunnel based on vision-lidar coupling, the tunnel being modelled using an unmanned aerial vehicle (UAV) equipped with a depth camera and a lidar, the method comprising:
S1, obtaining point cloud information collected by the depth camera, laser information collected by the lidar, and motion information of the UAV;
S2, generating a raster map through filtering the laser information and obtaining pose information of the UAV based on the motion information;
S3, obtaining a map model through fusing the point cloud information, the raster map, and the pose information by a Bayesian fusion method; and
S4, obtaining a new map model by repeating the S1 to the S3, and correcting a latest map model by feature matching based on a previous map model,
wherein in the S4, the correcting a latest map model by feature matching based on a previous map model includes:
S41, obtaining the previous map model as a reference frame, obtaining the latest map model, and finding an area corresponding to the previous map model from the latest map model as a current frame;
S42, denoting feature points in the reference frame by {Pi} and denoting feature points in the current frame by {Qi}, a number of feature points in the current frame being the same as that in the reference frame;
S43, constructing an inter-frame change model:
{Qi}=R{Pi}+T,
where R represents a rotation parameter and T represents a translation parameter;
S44, substituting the feature points in the reference frame and the feature points in the current frame into the inter-frame change model, and iteratively calculating the rotation parameter and the translation parameter; and
S45, obtaining a matching relationship between the previous map model and the latest map model based on the rotation parameter and the translation parameter, and correcting the latest map model based on the matching relationship;
wherein in the S44, the iteratively calculating the rotation parameter and the translation parameter includes:
substituting the feature points in the reference frame and the feature points in the current frame into the inter-frame change model, establishing an objective function based on the inter-frame change model, and taking the rotation parameter and the translation parameter that minimize a function value of the objective function as the finally calculated rotation parameter and translation parameter, wherein a formula of the objective function is:
L = (1/N) Σᵢ₌₁ᴺ ||qi − (R·pi + T)||²,
where L represents the function value of the objective function, pi represents a feature point in the reference frame, qi represents a feature point in the current frame, and N represents the number of feature points.
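The claim does not disclose a particular solver for S44. The sketch below is a minimal illustration assuming the common approach: an iterative-closest-point loop whose inner step computes R and T in closed form via SVD (the Kabsch/Umeyama solution). The function names, the nearest-neighbour pairing, and the iteration and tolerance parameters are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form R, T minimising (1/N) * sum ||q_i - (R p_i + T)||^2
    for paired N x 3 point arrays (Kabsch/Umeyama SVD solution)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = q_mean - R @ p_mean
    return R, T

def icp_align(P, Q, iters=30, tol=1e-6):
    """Iteratively estimate R, T such that {Qi} ~ R{Pi} + T (S43, S44),
    re-pairing points by nearest neighbour at each iteration."""
    R, T = np.eye(3), np.zeros(3)
    prev_L = np.inf
    for _ in range(iters):
        moved = P @ R.T + T                      # apply current estimate
        d2 = ((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matched = Q[d2.argmin(axis=1)]           # closest current-frame point
        R, T = best_rigid_transform(P, matched)
        L = ((matched - (P @ R.T + T)) ** 2).sum(-1).mean()  # objective value
        if abs(prev_L - L) < tol:                # converged
            break
        prev_L = L
    return R, T, L
```

With the reference-frame and current-frame feature points supplied as N × 3 arrays, the returned R and T give the matching relationship that S45 uses to correct the latest map model.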
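Similarly, S3 names Bayesian fusion without specifying it. A common concrete instance is a log-odds occupancy-grid update, sketched below under that assumption; the grid size, cell resolution, sensor-model probabilities, and the names `world_to_cell` and `bayes_update` are hypothetical, not from the patent.

```python
import numpy as np

grid = np.zeros((200, 200))   # log-odds raster map (S2); 0 = unknown
L_OCC = np.log(0.7 / 0.3)     # assumed sensor model: P(occupied | hit) = 0.7
L_FREE = np.log(0.3 / 0.7)    # assumed sensor model: P(occupied | miss) = 0.3
L_MIN, L_MAX = -10.0, 10.0    # clamp so cells stay updatable

def world_to_cell(points_xy, origin, resolution=0.05):
    """Map points (already moved into the map frame with the UAV pose
    from S2) onto raster cell indices; 0.05 m per cell is illustrative."""
    return np.floor((points_xy - origin) / resolution).astype(int)

def bayes_update(grid, hit_cells, miss_cells):
    """Fuse one batch of evidence into the map (S3). In log-odds form,
    Bayes' rule reduces to addition, so depth-camera point-cloud hits and
    lidar raster evidence accumulate into the same grid, cell by cell."""
    for i, j in hit_cells:
        grid[i, j] = np.clip(grid[i, j] + L_OCC, L_MIN, L_MAX)
    for i, j in miss_cells:
        grid[i, j] = np.clip(grid[i, j] + L_FREE, L_MIN, L_MAX)
    return grid
```

Because the update is additive, evidence from the depth camera and the lidar can be fused in any order, which is what makes the log-odds form a convenient realisation of the Bayesian fusion named in S3.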