US 12,330,821 B2
Autonomous driving system with air support
Lei Xu, Hong Kong (CN); Erkang Cheng, Hong Kong (CN); and Tingting Zhu, Hong Kong (CN)
Assigned to Nullmax (Hong Kong) Limited, Hong Kong (CN)
Filed by Nullmax (Hong Kong) Limited, Hong Kong (CN)
Filed on Feb. 15, 2022, as Appl. No. 17/672,668.
Claims priority of provisional application 63/308,032, filed on Feb. 8, 2022.
Prior Publication US 2023/0252896 A1, Aug. 10, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. B64U 10/13 (2023.01); B64U 101/30 (2023.01); G06V 10/82 (2022.01); G08G 1/01 (2006.01); G08G 1/0967 (2006.01); H04W 4/40 (2018.01)
CPC B64U 10/13 (2023.01) [G06V 10/82 (2022.01); G08G 1/0108 (2013.01); G08G 1/0125 (2013.01); G08G 1/0141 (2013.01); G08G 1/0145 (2013.01); G08G 1/096725 (2013.01); G08G 1/096766 (2013.01); H04W 4/40 (2018.02); B64U 2101/30 (2023.01)] 20 Claims
OG exemplary drawing
 
1. An autonomous driving system, comprising:
an unmanned aerial vehicle (UAV) in the air, wherein the UAV includes:
at least one UAV camera configured to collect first raw sensor information,
wherein the first raw sensor information comprises visual images and distances from ground objects to the UAV,
a UAV processor configured to convert the first raw sensor information to first ground traffic information in a three-dimensional (3D) coordinate system, and
a UAV communication module configured to transmit the collected first ground traffic information,
wherein the UAV communication module is further configured to transmit the collected first ground traffic information to one or more additional UAVs or one or more control centers; and
a land vehicle communicatively connected to the UAV in the air, wherein the land vehicle is configured to store the UAV, wherein the land vehicle is configured to charge the UAV, wherein the land vehicle is configured to release or launch the UAV, and wherein the land vehicle further comprises:
one or more vehicle sensors configured to collect second raw sensor information surrounding the land vehicle,
wherein the second raw sensor information comprises visual images and distances from surrounding objects to the land vehicle,
a land vehicle processor configured to convert the second raw sensor information to second ground traffic information in a two-dimensional (2D) coordinate system,
wherein the land vehicle processor is configured to determine a direction associated with the ground objects and the surrounding objects based on the visual images included in the respective raw sensor information,
wherein the land vehicle processor is further configured to release the UAV to a position of following or leading the land vehicle,
wherein the land vehicle processor is further configured to release the UAV when the land vehicle is approaching an intersection and the UAV is configured to fly toward the intersection, leading the land vehicle,
wherein when the UAV is at the intersection, the at least one UAV camera is further configured to collect the first ground traffic information comprising positions of crosswalks, lane dividing lines, curb sides, and other vehicles, and
a land vehicle communication module configured to receive the first ground traffic information from the UAV,
wherein the land vehicle processor is further configured to convert the first ground traffic information in the three-dimensional (3D) coordinate system to the two-dimensional (2D) coordinate system,
wherein the land vehicle processor is further configured to combine the first ground traffic information and the second ground traffic information to generate a world model, wherein the world model is a dataset that includes coordinates of one or more still objects, predicted trajectories of one or more moving objects, coordinates of one or more traffic signals, and one or more accessible areas, and
wherein the land vehicle processor is further configured to generate decisions for the land vehicle based on the world model.
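The claimed conversion of the first ground traffic information from the UAV's 3D coordinate system into the land vehicle's 2D coordinate system, and the combination of both information sources into a world model, can be sketched as follows. This is a minimal illustration only, not the patented implementation; all function names, field names, and the constant-velocity trajectory prediction are hypothetical assumptions, and the world-model keys simply mirror the dataset contents recited in the claim (still objects, predicted trajectories of moving objects, traffic signals, accessible areas).

```python
import numpy as np

def uav_3d_to_vehicle_2d(points_3d, uav_pose_xy, uav_yaw):
    """Project UAV-frame 3D points onto the land vehicle's 2D ground plane.

    points_3d:   (N, 3) array in the UAV's coordinate system.
    uav_pose_xy: (2,) UAV ground position expressed in the vehicle's 2D frame.
    uav_yaw:     UAV heading relative to the vehicle frame, in radians.
    """
    c, s = np.cos(uav_yaw), np.sin(uav_yaw)
    rot = np.array([[c, -s], [s, c]])
    # Drop altitude (z), then rotate and translate into the vehicle frame.
    return points_3d[:, :2] @ rot.T + uav_pose_xy

def build_world_model(uav_obs_2d, vehicle_obs_2d):
    """Combine first (UAV) and second (vehicle) ground traffic information.

    Each observation is a dict with a "kind" label, a 2D position "xy",
    and an optional "velocity" for moving objects.
    """
    model = {
        "still_objects": [],
        "moving_objects": [],    # each carries a predicted trajectory
        "traffic_signals": [],
        "accessible_areas": [],
    }
    for obs in list(uav_obs_2d) + list(vehicle_obs_2d):
        if obs["kind"] == "signal":
            model["traffic_signals"].append(obs["xy"])
        elif obs.get("velocity") is not None:
            # Hypothetical constant-velocity prediction over a 3 s horizon.
            traj = [tuple(np.asarray(obs["xy"]) + t * np.asarray(obs["velocity"]))
                    for t in (1.0, 2.0, 3.0)]
            model["moving_objects"].append({"xy": obs["xy"], "trajectory": traj})
        elif obs["kind"] == "area":
            model["accessible_areas"].append(obs["xy"])
        else:
            model["still_objects"].append(obs["xy"])
    return model
```

For example, a crosswalk corner the UAV observes at (1, 0, 30) in its own frame, while hovering at (5, 0) in the vehicle frame with zero relative yaw, lands at (6, 0) in the vehicle's 2D coordinates before being merged with the vehicle's own detections.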