CPC G06V 20/58 (2022.01) [B60W 40/04 (2013.01); B60W 2420/403 (2013.01); B60W 2420/408 (2024.01); B60W 2554/4041 (2020.02); B60W 2554/4049 (2020.02)]

14 Claims

1. A method of determining a vehicle position, comprising:
generating a vehicle-based point cloud, wherein the vehicle-based point cloud includes a plurality of vehicle-based objects in proximity to the vehicle detected by a sensor mounted to the vehicle, and wherein the vehicle-based point cloud is referenced to a vehicle-based coordinate system, wherein generating the vehicle-based point cloud of objects in proximity to the vehicle comprises:
receiving image data from the sensor mounted to the vehicle;
performing object detection on the image data received from the sensor mounted to the vehicle;
performing 2D visual feature extraction from the image data received from the sensor mounted to the vehicle;
labeling the 2D visual features from the image data received from the sensor mounted to the vehicle with their object type;
creating or updating a vehicle-based 3D point cloud with the 2D visual features from the image data received from the sensor mounted to the vehicle; and
calculating geographic coordinates in the vehicle-based coordinate system for points in the vehicle-based 3D point cloud with the 2D visual features from the image data received from the sensor mounted to the vehicle;
updating the vehicle-based point cloud from the plurality of vehicle-based objects in proximity to the vehicle detected by the sensor mounted to the vehicle;
receiving an infrastructure-based point cloud of a plurality of infrastructure-based objects detected by a sensor mounted at a fixed location external to the vehicle, wherein the infrastructure-based point cloud is generated by:
collecting image data from the sensor mounted at the fixed location external to the vehicle;
performing object detection on the collected image data from the sensor mounted at the fixed location external to the vehicle;
performing 2D visual feature extraction from the collected image data from the sensor mounted at the fixed location external to the vehicle;
labeling the 2D visual features from the collected image data from the sensor mounted at the fixed location external to the vehicle with their object type;
creating or updating an infrastructure-based 3D point cloud with the 2D visual features from the collected image data from the sensor mounted at the fixed location external to the vehicle; and
calculating geographic coordinates in a global coordinate system for points in the infrastructure-based 3D point cloud with the 2D visual features from the collected image data from the sensor mounted at the fixed location external to the vehicle;
receiving position information referenced to the global coordinate system for the plurality of infrastructure-based objects included in the infrastructure-based point cloud;
registering the plurality of vehicle-based objects in the vehicle-based point cloud with the plurality of infrastructure-based objects in the infrastructure-based point cloud to determine a relationship between the vehicle-based coordinate system and the global coordinate system;
using the position information about the plurality of infrastructure-based objects in the infrastructure-based point cloud and the relationship between the vehicle-based coordinate system and the global coordinate system to determine the vehicle position in the global coordinate system; and
using the determined vehicle position in the global coordinate system to command a vehicle action.
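As an informal illustration of the vehicle-side sub-steps recited above (object detection, 2D visual feature extraction, labeling by object type, and creation of a 3D point cloud with coordinates in the vehicle-based coordinate system), the sketch below shows one way those steps could be composed. It is not taken from the specification: the ORB feature extractor, the pinhole back-projection, and the `detections` and `depth` inputs are assumptions made for the example, and all names are illustrative.

```python
# Hypothetical sketch: labeled 2D features from one camera frame back-projected
# into a 3D point cloud expressed in the sensor's (vehicle-based) frame.
# Object detections and per-pixel depth are assumed to be supplied by other
# components (e.g., a neural detector and stereo or LiDAR depth).
import numpy as np
import cv2

def labeled_features_to_points(gray, detections, depth, K):
    """gray: HxW uint8 image; detections: list of (label, (x0, y0, x1, y1)) boxes;
    depth: HxW depth map in metres; K: 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(nfeatures=500)                 # 2D visual feature extraction
    keypoints, _ = orb.detectAndCompute(gray, None)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    points, labels = [], []
    for kp in keypoints or []:
        u, v = kp.pt
        # Label the feature with the object type of the detection box that contains it.
        label = next((lab for lab, (x0, y0, x1, y1) in detections
                      if x0 <= u <= x1 and y0 <= v <= y1), None)
        if label is None:
            continue
        z = float(depth[int(v), int(u)])
        if z <= 0:
            continue
        # Pinhole back-projection: pixel coordinates plus depth give a 3D point.
        points.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        labels.append(label)
    return np.asarray(points), labels
```

The infrastructure-side sub-steps follow the same pattern; the difference is that the resulting points are additionally expressed in the global coordinate system, as sketched next.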
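Because the infrastructure sensor sits at a fixed location, its camera-frame points can be mapped into the global coordinate system with a known, constant pose. A minimal sketch, assuming the camera-to-world rotation and translation (here `R_wc`, `t_wc`) come from an installation survey; these names are illustrative:

```python
import numpy as np

def camera_points_to_global(points_cam, R_wc, t_wc):
    """points_cam: Nx3 points in the fixed camera's frame.
    R_wc (3x3), t_wc (3,): assumed camera-to-world pose from the site survey."""
    # p_world = R_wc @ p_cam + t_wc, applied row-wise.
    return points_cam @ R_wc.T + t_wc
```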
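For the registering and position-determination steps, one standard way to recover the relationship between the vehicle-based and global coordinate systems from matched object points is a least-squares rigid alignment (Kabsch/Umeyama style). The claim does not prescribe this method; the sketch below is an assumed illustration, the matched coordinates in the usage example are made up, and correspondence by object type plus outlier rejection (e.g., RANSAC) are assumed to happen upstream.

```python
import numpy as np

def register(vehicle_pts, global_pts):
    """vehicle_pts, global_pts: corresponding Nx3 point arrays (N >= 3, not collinear)."""
    mu_v, mu_g = vehicle_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (vehicle_pts - mu_v).T @ (global_pts - mu_g)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T                # rotation: vehicle -> global
    t = mu_g - R @ mu_v                                    # translation: vehicle -> global
    return R, t

# Made-up matched object coordinates (e.g., sign posts seen by both sensors).
veh = np.array([[4.0, 1.0, 0.0], [10.0, -2.0, 0.5], [7.0, 6.0, 1.0]])
glo = np.array([[104.2, 51.3, 0.0], [110.1, 48.2, 0.5], [107.5, 56.4, 1.0]])
R, t = register(veh, glo)
# The vehicle is the origin of its own coordinate system, so its global position
# is simply the translation component of the recovered transform.
vehicle_position_global = R @ np.zeros(3) + t
```

With the transform in hand, the determined global position can then be used to command a vehicle action, as in the final step of the claim.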