US 12,243,326 B2
Methods and system for controlling a vehicle using fusion of multi-modality perception data
Sai Vishnu Aluru, Commerce Township, MI (US); Steffen Peter Lindenthal, Oshawa (CA); Brian Yousif-Dickow, Farmington Hills, MI (US); and Ali Atriss, Canton, MI (US)
Assigned to GM GLOBAL TECHNOLOGY OPERATIONS LLC, Detroit, MI (US)
Filed by GM GLOBAL TECHNOLOGY OPERATIONS LLC, Detroit, MI (US)
Filed on Jul. 13, 2022, as Appl. No. 17/812,287.
Prior Publication US 2024/0020987 A1, Jan. 18, 2024
Int. Cl. G06K 9/00 (2022.01); B60W 40/10 (2012.01); G06F 18/25 (2023.01); G06V 20/58 (2022.01)
CPC G06V 20/58 (2022.01) [B60W 40/10 (2013.01); G06F 18/25 (2023.01); B60W 2420/403 (2013.01); B60W 2420/408 (2024.01); B60W 2510/18 (2013.01); B60W 2510/20 (2013.01); B60W 2554/4049 (2020.02)] 20 Claims
OG exemplary drawing
12. A vehicle system for a vehicle, the system comprising:
at least one camera that is incorporated into the vehicle;
at least one perception sensor that is incorporated into the vehicle; and
at least one processor in operable communication with the camera and the perception sensor, the at least one processor configured to execute program instructions, wherein the program instructions are configured to cause the at least one processor to:
receive a frame of visible image data from the at least one camera;
receive a frame of invisible perception data from a perception sensor;
fuse the frame of invisible perception data and the frame of visible image data to provide a fused frame of perception data;
perform object detection, classification and tracking using a machine learning algorithm via a neural network based on the fused frame of perception data to provide object detection data;
determine a usability score for the frame of visible image data, based on each of the following: a repetitive constant change in contrast within the frame of the visible image data, a luminance of the frame of the visible image data, a signal to noise ratio of the frame of the visible image data, and a number of edges of the frame of the visible image data;
perform the object detection, classification and tracking using the machine learning algorithm based on the fused frame of perception data when the usability score is less than a predetermined value and based on the frame of visible image data when the usability score is greater than the predetermined value; and
control steering, propulsion, and braking of the vehicle based on the object detection data, including based on the object detection, classification and tracking using the machine learning algorithm based on the fused frame of perception data when the usability score is less than a predetermined value and based on the frame of visible image data when the usability score is greater than the predetermined value.
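The selection logic recited in the claim can be sketched in a few lines: a usability score is computed for the visible-image frame from the four claimed factors (repetitive change in contrast, luminance, signal-to-noise ratio, and edge count), and the fused frame is used when the score falls below a threshold, the visible frame otherwise. This is an illustrative sketch only, not the patented implementation; the function names, equal weighting, normalized [0, 1] factor ranges, and the 0.5 threshold are all hypothetical.

```python
# Illustrative sketch of the claim-12 selection logic. All weights,
# value ranges, helper names, and the threshold are hypothetical
# assumptions, not taken from the patent.

def usability_score(flicker, luminance, snr, edge_count,
                    weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine four normalized factors in [0, 1] into one score.

    flicker    -- repetitive constant change in contrast (higher = worse)
    luminance  -- normalized frame luminance (higher = better)
    snr        -- normalized signal-to-noise ratio (higher = better)
    edge_count -- normalized number of detected edges (higher = better)
    """
    # Flicker degrades usability, so it enters inverted.
    factors = (1.0 - flicker, luminance, snr, edge_count)
    return sum(w * f for w, f in zip(weights, factors))


def select_perception_input(visible_frame, fused_frame, score,
                            threshold=0.5):
    """Per the claim: detection runs on the fused frame when the
    usability score is below the threshold, and on the visible-image
    frame when it is above."""
    return fused_frame if score < threshold else visible_frame
```

In a clear scene the visible frame scores high and is used directly; in a degraded scene (low light, low SNR, few edges, strong flicker) the score drops and the fused visible-plus-invisible frame is selected for object detection, classification, and tracking.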