US 12,296,839 B2
Learning-model predictive control with multi-step prediction for vehicle motion control
Amir Khajepour, Waterloo (CA); Chao Yu, Waterloo (CA); Yubiao Zhang, Sterling Heights, MI (US); Qingrong Zhao, Troy, MI (US); and SeyedAlireza Kasaiezadeh Mahabadi, Novi, MI (US)
Assigned to GM GLOBAL TECHNOLOGY OPERATIONS LLC, Detroit, MI (US); and UNIVERSITY OF WATERLOO, Waterloo (CA)
Filed by GM Global Technology Operations LLC, Detroit, MI (US); and University of Waterloo, Waterloo (CA)
Filed on Nov. 30, 2022, as Appl. No. 18/060,023.
Prior Publication US 2024/0174246 A1, May 30, 2024
Int. Cl. B60W 50/10 (2012.01); B60W 40/10 (2012.01); B60W 50/00 (2006.01)
CPC B60W 50/10 (2013.01) [B60W 40/10 (2013.01); B60W 50/0097 (2013.01)] 14 Claims
OG exemplary drawing
 
1. A system for learning-model predictive control (LMPC) with multi-step prediction for motion control of a vehicle, the system comprising:
one or more sensors disposed on the vehicle, the one or more sensors measuring real-time static and dynamic data about the vehicle;
one or more actuators disposed on the vehicle, the one or more actuators altering static and dynamic characteristics of the vehicle;
one or more control modules each having a processor, a memory, and input/output (I/O) ports in communication with the one or more sensors and the one or more actuators, the processor executing program code portions stored in the memory, the program code portions comprising:
a first program code portion that causes the one or more sensors and the one or more actuators to obtain vehicle state information;
a second program code portion that receives a driver input and generates a desired dynamic output based on the driver input and the vehicle state information;
a third program code portion that estimates actions of the one or more actuators based on the vehicle state information and the driver input; and
a fourth program code portion that utilizes the vehicle state information, the driver input, and the estimated actions of the one or more actuators to select one or more models of a physics-based vehicle model and a machine-learning model of the vehicle to selectively adjust commands to the one or more actuators,
wherein the fourth program code portion further receives the vehicle state information, the driver input, and the estimated actions of the one or more actuators within the LMPC, wherein the LMPC includes program code for an offline training application and a real-time application, wherein the offline training application further comprises program code that, upon receiving data from the one or more sensors and from the one or more actuators:
generates a dataset;
evaluates each data point of a plurality of data points in the dataset for similarity to other data points in the dataset;
removes repeated data from the dataset; and
upon determining that a new data point in the dataset is within a predefined Euclidean distance of a previous data point in the dataset, removes the previous data point and retains the new data point in the dataset, and upon determining that the new data point in the dataset is not within the predefined Euclidean distance of the previous data point in the dataset, retains the new data point in the dataset; wherein each data point in the dataset corresponds to a distinct vehicle dynamic state, and wherein
the real-time application further comprises:
an online machine learning process that predicts actuator outputs for current vehicle state information based on accumulated data from real-time driving, wherein the predicted actuator outputs are determined according to a mean value and a variance corresponding to a squared exponential kernel function.
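The offline training limitation above describes filtering the training dataset by Euclidean distance, keeping the newer of any pair of similar data points. The following is a minimal sketch of that filtering step, assuming NumPy state vectors; the function name, threshold value, and example states are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def filter_dataset(points, threshold=0.05):
    """Keep one representative per neighborhood of similar vehicle states.

    When a new point lies within `threshold` (Euclidean distance) of an
    already-kept point, the older point is dropped and the new one kept,
    mirroring the claim's preference for the most recent measurement.
    Hypothetical sketch; `threshold` and the array layout are assumptions.
    """
    kept = []  # each entry is a 1-D state vector (e.g., speed, yaw rate, ...)
    for new_point in points:
        new_point = np.asarray(new_point, dtype=float)
        # Drop any previously kept point that is too close to the new one,
        # then retain the new point either way.
        kept = [p for p in kept
                if np.linalg.norm(new_point - p) >= threshold]
        kept.append(new_point)
    return np.array(kept)

# Example: two nearly identical states collapse to the latest one.
states = [[10.0, 0.01], [10.001, 0.011], [15.0, 0.20]]
print(filter_dataset(states, threshold=0.05))
```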
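The final limitation describes predicting actuator outputs by a mean value and variance under a squared exponential kernel, i.e., Gaussian-process-style regression over the accumulated driving data. The sketch below shows such a prediction step under common textbook assumptions; the hyperparameters, noise term, and toy data are illustrative only and are not taken from the patent.

```python
import numpy as np

def se_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared exponential (RBF) kernel between row-vector sets A and B."""
    sq_dist = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * sq_dist / length_scale**2)

def gp_predict(X_train, y_train, X_new, noise_var=1e-3):
    """Posterior mean and variance of the actuator output at X_new."""
    K = se_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = se_kernel(X_train, X_new)
    K_ss = se_kernel(X_new, X_new)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    var = np.diag(K_ss - K_s.T @ np.linalg.solve(K, K_s))
    return mean, var

# Toy usage: states accumulated online -> predicted actuator output for a new state.
X_train = np.array([[10.0, 0.01], [15.0, 0.20], [20.0, 0.35]])
y_train = np.array([0.2, 0.5, 0.8])            # e.g., measured actuator commands
mean, var = gp_predict(X_train, y_train, np.array([[17.0, 0.25]]))
print(mean, var)
```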