CPC B60W 50/0098 (2013.01) [B60R 16/0231 (2013.01); B60W 60/001 (2020.02); B60W 2420/408 (2024.01); B60W 2552/53 (2020.02); B60W 2554/00 (2020.02); B60W 2555/60 (2020.02); B60W 2556/35 (2020.02)]    3 Claims

1. An endogenous guarantee method for functional safety and network security of an intelligent connected vehicle perception decision-making module, comprising the following steps:
(1) designing and implementing multiple heterogeneous perception and decision-making units and deploying the multiple heterogeneous perception and decision-making units on vehicles, wherein:
the multiple heterogeneous perception and decision-making units comprise a higher-level perception and decision-making unit and several lower-level perception and decision-making units;
the higher-level perception and decision-making unit is configured to support autopilot L3, L4 or L5, while the lower-level perception and decision-making units support autopilot L2 or L3;
the higher-level perception and decision-making unit is configured to be a main brain of autopilot comprising a light detection and ranging (LIDAR) sensor, a radio detection and ranging (radar) sensor, a camera, a global positioning system (GPS), an inertial measurement unit (IMU), an odometer sensor, a traffic signal detection (TSD) subsystem, a moving object tracking (MOT) subsystem, a mapper subsystem, and a localizer subsystem to support the autopilot L3, L4 or L5;
the higher-level perception and decision-making unit is configured to receive sensor data and fuse the sensor data to obtain the information needed for a vehicle to complete a task, wherein the information includes information of pedestrians, vehicles and obstacles, information of lane lines, driving areas, traffic signs and signals, and information of unmanned vehicle positioning and a map based on the GPS and inertial navigation from the IMU;
based on the sensor data, the main brain combines prior information of a road network, traffic rules and automotive dynamics to form route planning, path planning, behavior selection, motion planning, obstacle avoidance and control decision-making;
the multiple lower-level perception and decision-making units comprise the components and perception functions required to support autopilot L2 or L3, the components comprising a radar, a camera and other autopilot sensors of L2 or L3, and the functions comprising obstacle detection, collision warning, lane detection and lane departure warning;
the higher-level perception and decision-making unit comprises an L4 automatic driving perceptual decision unit which forms decision results for auto-driving, wherein the decision results are configured to be sent to various electronic control unit (ECU) execution components of the vehicle to achieve control over the vehicle;
the decision results are set to: F = {x, y, z, w, u, v, . . .}, where x, y, z, w, u, v, . . . represent the decision results given by the perceptual decision unit, including turning, accelerating, braking and parking;
the multiple lower-level perception and decision-making units comprise one L3 and two L2 perceptual decision units and are configured to form auto-driving decision results based on the perceptual results, wherein the decision results are: W = {x, y, z, w, u, . . .}, U = {x, y, z, w}, and V = {x, y, z, w};
U∩V∩W∩F = {x, y, z, w}, where x, y, z, w are true-value type data, indicating whether to turn left, turn right, accelerate or brake, respectively;
the higher-level perception and decision-making unit and the multiple lower-level perception and decision-making units are all designed with different hardware platforms, including field bus, ARM and x86 platforms, different operating systems, and different perceptual decision modules;
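The set relationship defined in step (1) can be illustrated with a minimal Python sketch. The four shared actions are the ones named in the claim; the extra members of F and W ("park", "reverse") are hypothetical stand-ins for the elided u, v, . . . elements:

    # Minimal sketch of the decision-result sets in step (1).
    # "park" and "reverse" are hypothetical placeholders for the
    # elided u, v, . . . members of F and W.
    COMMON = {"turn_left", "turn_right", "accelerate", "brake"}

    F = COMMON | {"park", "reverse"}   # L4 unit: richest decision set
    W = COMMON | {"park"}              # L3 unit
    U = set(COMMON)                    # first L2 unit
    V = set(COMMON)                    # second L2 unit

    # Invariant relied on by the arbiter: all four units can express
    # the four safety-critical actions x, y, z, w.
    assert U & V & W & F == COMMON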
(2) during vehicle driving, each perception and decision-making unit makes decisions based on the perceived information, wherein:
the decision results incorporate information perceived from multiple dimensions, comprising video information, radar information, vehicle location, speed, and acceleration;
a decision algorithm for each unit covers at least three cases:
a turn decision is made based on the perceived results, issuing turn instructions including a target turn angle and a target angular speed of a steering wheel;
a brake command is sent when a headway τ = l/v is detected to be less than a certain value, where l represents a distance to the vehicle ahead and v represents a speed of the vehicle;
a brake command is sent when a time-to-collision ttc = l/(v1 − v2) is detected to be less than a certain value, where l represents the distance to the vehicle or pedestrian ahead, v1 represents a speed of the vehicle and v2 indicates a speed of the vehicle ahead or a speed of a pedestrian,
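A brief sketch of the two braking cases above, assuming Python; the thresholds TAU_MIN and TTC_MIN are illustrative, since the claim specifies only "a certain value" for each:

    # Sketch of the braking logic in step (2); thresholds are assumed.
    TAU_MIN = 2.0   # minimum allowed headway, seconds (assumed)
    TTC_MIN = 3.0   # minimum allowed time-to-collision, seconds (assumed)

    def should_brake(l, v1, v2):
        """l: distance to the vehicle or pedestrian ahead (m);
        v1: own speed (m/s); v2: speed of the object ahead (m/s)."""
        if v1 <= 0:
            return False                          # standing still
        if l / v1 < TAU_MIN:                      # headway tau = l / v
            return True
        if v1 > v2 and l / (v1 - v2) < TTC_MIN:   # ttc = l / (v1 - v2)
            return True
        return False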
(3) the decision results of the higher-level unit and two of the lower-level units among the four perception and decision-making units are sent to an arbiter for adjudication, while the remaining lower-level perceptual decision unit stays online but temporarily does not participate in adjudication; wherein:
a decision result of the i-th perceptual decision unit sent to the arbiter is represented as <xi, yi, zi, wi | ai, bi, ci, di>, where i = 1, 2, 3; xi, yi, zi, wi are true-value type data, indicating whether to turn left, turn right, accelerate and brake; ai, bi, ci, di are floating point data, representing the target angle of left turn, the target angle of right turn, the acceleration and the braking force, respectively; the adjudication process is divided into two stages, comprising:
a precise decision stage, wherein the arbiter judges whether (x1, y1, z1, w1) = (x2, y2, z2, w2) = (x3, y3, z3, w3) is true; if so, the arbiter enters an approximate adjudication stage; otherwise, it is considered that there is an unsafe perception and decision-making unit and the arbiter outputs ⊥; and
an approximate adjudication stage, wherein for any i, j ∈ {1, 2, 3}, i ≠ j, the arbiter judges whether √((ai − aj)² + (bi − bj)² + (ci − cj)² + (di − dj)²) ≤ θ is true, where θ indicates an approximate coefficient allowed by the system; if the inequality holds, the decision results of the higher-level perception and decision-making unit are output; otherwise, ⊥ is output,
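The two-stage adjudication can be sketched as follows, assuming Python; each decision mirrors the <xi, yi, zi, wi | ai, bi, ci, di> encoding above, decisions[0] is taken to be the higher-level unit, and THETA is an illustrative value for θ:

    import math
    from typing import List, Optional, Tuple

    # (x, y, z, w) truth values and (a, b, c, d) floating-point targets.
    Decision = Tuple[Tuple[bool, bool, bool, bool],
                     Tuple[float, float, float, float]]

    THETA = 0.5  # approximate coefficient allowed by the system (assumed)

    def adjudicate(decisions: List[Decision]) -> Optional[Decision]:
        """Returns the higher-level unit's decision on agreement,
        or None (the claim's ⊥) on disagreement."""
        # Stage 1, precise: all truth-value tuples must be identical.
        bools = [d[0] for d in decisions]
        if any(b != bools[0] for b in bools[1:]):
            return None
        # Stage 2, approximate: every pairwise Euclidean distance of
        # the floating-point tuples must stay within THETA.
        for i in range(len(decisions)):
            for j in range(i + 1, len(decisions)):
                if math.dist(decisions[i][1], decisions[j][1]) > THETA:
                    return None
        return decisions[0]  # output the higher-level unit's decision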
(4) when the arbiter outputs a decision result, the decision result is directly sent to the controller area network (CAN) bus, and the vehicle is configured to execute the command; otherwise, when the arbiter outputs ⊥, the vehicle performs the following processing according to the situation:
if there exist i, j ∈ {1, 2, 3}, i ≠ j, such that (xi, yi, zi, wi) = (xj, yj, zj, wj) and √((ai − aj)² + (bi − bj)² + (ci − cj)² + (di − dj)²) ≤ θ hold, then the k-th unit is replaced by the lower-level perceptual decision unit that is online but does not participate in the adjudication, where k ∈ {1, 2, 3}, k ≠ i, k ≠ j;
otherwise, the vehicle operates according to a preset bottom-line security procedure until the vehicle stops or a user intervenes.
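A sketch of this fallback handling, assuming Python and the adjudicate function above; send_to_can_bus, swap_in_standby_unit and bottom_line_procedure are hypothetical stubs for vehicle interfaces the claim does not name:

    import math

    def send_to_can_bus(decision):
        print("CAN bus <-", decision)     # stub: forward to the ECUs

    def swap_in_standby_unit(k):
        print(f"unit {k} replaced by the standby lower-level unit")

    def bottom_line_procedure():
        print("bottom-line procedure: slow to a stop or await the user")

    def handle(decisions, theta=0.5):
        result = adjudicate(decisions)
        if result is not None:
            send_to_can_bus(result)       # agreement: execute directly
            return
        # Arbiter output ⊥: look for a pair (i, j) that still agrees
        # precisely and approximately; the remaining unit k is suspect.
        for i in range(3):
            for j in range(i + 1, 3):
                if (decisions[i][0] == decisions[j][0]
                        and math.dist(decisions[i][1],
                                      decisions[j][1]) <= theta):
                    k = ({0, 1, 2} - {i, j}).pop()
                    swap_in_standby_unit(k)
                    return
        bottom_line_procedure()           # no agreeing pair: fail safe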