US 12,280,800 B2
Method for driving in blind spot of sensor mounted on autonomous vehicle via communication with server and computing device using the same
Ki Cheol Shin, Seongnam-si (KR); Myeong Seon Heo, Seoul (KR); Byung Yong You, Seongnam-si (KR); and Ji Hyeong Han, Anyang-si (KR)
Assigned to Autonomous A2Z, Gyeongsan-si (KR)
Filed by Autonomous A2Z, Gyeongsan-si (KR)
Filed on Oct. 10, 2022, as Appl. No. 17/962,713.
Claims priority of application No. 10-2022-0112878 (KR), filed on Sep. 6, 2022.
Prior Publication US 2024/0075953 A1, Mar. 7, 2024
Int. Cl. B60W 60/00 (2020.01)
CPC B60W 60/0015 (2020.02) [B60W 2520/06 (2013.01); B60W 2520/10 (2013.01); B60W 2552/50 (2020.02); B60W 2554/20 (2020.02); B60W 2554/4044 (2020.02); B60W 2554/802 (2020.02); B60W 2555/60 (2020.02); B60W 2556/40 (2020.02); B60W 2556/45 (2020.02); B60W 2720/10 (2013.01); B60W 2720/106 (2013.01)] 14 Claims
 
1. A method for driving in a blind spot of a sensor, mounted on an autonomous vehicle, via communication with a server, comprising steps of:
(a) a computing device of the autonomous vehicle running on a specific road (i) locating the autonomous vehicle by using at least one of precision map information, sensor information and GPS (Global Positioning System) information, and (ii) in response to determining that the autonomous vehicle is expected to encounter a specific event within a specific time period as a result of locating the autonomous vehicle, transmitting vehicle location data corresponding to the result of locating the autonomous vehicle, sensor location data corresponding to a location where the sensor is mounted on the autonomous vehicle, direction data corresponding to a travelling direction of the autonomous vehicle, vehicle structure data of the autonomous vehicle and viewing angle data of the sensor to the server, to thereby query the server to determine whether a region of interest corresponding to the specific event to be encountered on the specific road is included in blind spot candidates for the blind spot of the sensor;
(b) the computing device receiving blind spot stereoscopic data from the server, wherein the blind spot stereoscopic data is computed by referring to the vehicle location data, the sensor location data, the direction data, the vehicle structure data, the viewing angle data and three-dimensional (3D) occlusion environmental data corresponding to at least one occluding static object included in the blind spot candidates; and
(c) the computing device controlling movement of the autonomous vehicle by referring to the blind spot stereoscopic data so that the autonomous vehicle drives in the blind spot,
wherein, at the step of (b), the computing device acquires the blind spot stereoscopic data, including data on a horizontal-direction occluding region and data on a vertical-direction occluding region, from the server,
wherein the computing device divides blind spot determining processes into a horizontal-direction blind spot determining process and a vertical-direction blind spot determining process, wherein (i) the horizontal-direction blind spot determining process determines the blind spot on a horizontal plane defined by an x-axis and a y-axis originating at the sensor installed at a certain location on the autonomous vehicle, where the positive direction of the x-axis is taken to be the travelling direction of the autonomous vehicle and the positive direction of the y-axis is taken to be the left direction from the autonomous vehicle, and (ii) the vertical-direction blind spot determining process determines the blind spot on a vertical plane defined by the x-axis and a z-axis originating at the sensor installed at the certain location on the autonomous vehicle, where the positive direction of the x-axis is taken to be the travelling direction of the autonomous vehicle and the positive direction of the z-axis is taken to be the direction vertically upwards from the autonomous vehicle, and
wherein the computing device instructs the server to perform, as the horizontal-direction blind spot determining process, sub-processes of (i) acquiring coordinates of the sensor installed at the certain location, (ii) detecting a plurality of occlusion-related points, included within a horizontal-direction viewing angle of the sensor, among a plurality of points on boundary lines of a horizontal cross-section of the occluding static object included in the three-dimensional occlusion environmental data, and selecting a first linear line and a second linear line from a plurality of linear lines connecting the coordinates of the sensor to the plurality of occlusion-related points, wherein an angle between the first linear line and the second linear line is the largest among angles between any two different linear lines selected from the plurality of linear lines, and wherein the second linear line forms a smaller angle with the x-axis than the first linear line does, (iii) defining the intersections of the first linear line and the second linear line with a boundary line of a side of the horizontal cross-section facing the autonomous vehicle respectively as a first point and a second point, defining the first linear line and the second linear line respectively as A₁X − Y + B₁ = 0 and A₂X − Y + B₂ = 0, and defining a third linear line connecting the first point and the second point as A₃X − Y + B₃ = 0, and (iv) determining whether the region of interest corresponding to the specific event is included in the horizontal-direction occluding region simultaneously satisfying the inequalities A₁X − Y + B₁ < 0, A₂X − Y + B₂ > 0 and A₃X − Y + B₃ > 0 of the first linear line to the third linear line, and then the computing device receives a result thereof from the server, and
wherein the computing device instructs the server to perform, as the vertical-direction blind spot determining process, sub-processes of (i) acquiring the coordinates of the sensor installed at the certain location, (ii) detecting a plurality of occlusion-related points, included within a vertical-direction viewing angle of the sensor, among a plurality of points on boundary lines of a vertical cross-section of the occluding static object included in the three-dimensional occlusion environmental data, and selecting a fourth linear line and a fifth linear line from a plurality of linear lines connecting the coordinates of the sensor to the plurality of occlusion-related points, wherein an angle between the fourth linear line and the fifth linear line is the largest among angles between any two different linear lines selected from the plurality of linear lines, (iii) defining the intersections of the fourth linear line and the fifth linear line with a boundary line of a side of the vertical cross-section facing the autonomous vehicle respectively as a fourth point and a fifth point, wherein the fifth point is located closer to the ground than the fourth point, defining the fourth linear line and the fifth linear line respectively as C₁X − Z + D₁ = 0 and C₂X − Z + D₂ = 0, and defining a sixth linear line connecting the fourth point and the fifth point as C₃X − Z + D₃ = 0, and (iv) determining whether the region of interest corresponding to the specific event is included in the vertical-direction occluding region not surrounded by the fourth linear line, the fifth linear line and the sixth linear line, and then the computing device receives a result thereof from the server.
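
For illustration only, the sketch below shows one way the data items transmitted in step (a) might be packaged for the query to the server. The claim fixes the data items but not a message format, so every field name and value here is hypothetical.

import json

# Illustrative only: the claim lists the transmitted data items but no
# wire format, so all field names and values here are hypothetical.
blind_spot_query = {
    "vehicle_location":  {"lat": 37.2636, "lon": 127.0286},  # localization result
    "sensor_location":   {"x": 1.2, "y": 0.0, "z": 1.6},     # mount point on the vehicle
    "direction_deg":     87.5,                                # travelling direction
    "vehicle_structure": {"length_m": 4.6, "width_m": 1.9, "height_m": 1.5},
    "viewing_angle":     {"horizontal_deg": 120.0, "vertical_deg": 30.0},
    "event":             {"type": "crosswalk_ahead", "eta_s": 8.0},  # the specific event
}
payload = json.dumps(blind_spot_query)  # transmitted to the server in step (a)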
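As a non-authoritative illustration of the horizontal-direction blind spot determining process, the Python sketch below follows sub-processes (i) through (iv) of the claim. It assumes the sensor's optical axis coincides with the +x (travelling) direction, takes the tangent boundary points themselves as the first and second points (i.e., it assumes the extreme rays graze the side of the cross-section facing the vehicle), and copies the claim's sign pattern, which is consistent for an occluding object on the −y (right-hand) side of the sensor. All function names are invented for this sketch.

import math
from itertools import combinations

def line_coeffs(p, q):
    # Coefficients (A, B) of A*X - Y + B = 0 through points p and q;
    # assumes the line is not vertical (p and q differ in x).
    A = (q[1] - p[1]) / (q[0] - p[0])
    return A, p[1] - A * p[0]

def horizontal_blind_spot_contains(sensor, boundary_pts, fov_deg, roi_pts):
    # sensor:       (x, y) of the sensor; +x = travelling direction, +y = left.
    # boundary_pts: points on the boundary lines of the occluding object's
    #               horizontal cross-section (from the 3D occlusion data).
    # fov_deg:      horizontal-direction viewing angle of the sensor.
    # roi_pts:      sample points of the region of interest.
    sx, sy = sensor
    half_fov = math.radians(fov_deg) / 2.0

    def ray_angle(p):
        return math.atan2(p[1] - sy, p[0] - sx)

    # (ii) occlusion-related points: boundary points within the viewing
    # angle, with the optical axis assumed to be +x.
    visible = [p for p in boundary_pts if abs(ray_angle(p)) <= half_fov]

    # First/second lines: the pair of rays with the largest mutual angle;
    # the second line forms the smaller angle with the x-axis.
    p1, p2 = max(combinations(visible, 2),
                 key=lambda pq: abs(ray_angle(pq[0]) - ray_angle(pq[1])))
    if abs(ray_angle(p1)) < abs(ray_angle(p2)):
        p1, p2 = p2, p1

    # (iii) This sketch takes the tangent points themselves as the first and
    # second points on the side of the cross-section facing the vehicle.
    A1, B1 = line_coeffs(sensor, p1)  # first line:  A1*X - Y + B1 = 0
    A2, B2 = line_coeffs(sensor, p2)  # second line: A2*X - Y + B2 = 0
    A3, B3 = line_coeffs(p1, p2)      # third line:  A3*X - Y + B3 = 0

    # (iv) The claim's sign pattern for the horizontal-direction occluding
    # region (consistent, e.g., for an occluder on the -y side of the sensor).
    def occluded(pt):
        x, y = pt
        return (A1 * x - y + B1 < 0 and
                A2 * x - y + B2 > 0 and
                A3 * x - y + B3 > 0)

    return all(occluded(p) for p in roi_pts)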
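Under the same assumptions, a companion sketch for the vertical-direction blind spot determining process follows. Since the fourth and fifth linear lines pass through the sensor, the region "not surrounded by" the fourth, fifth and sixth linear lines is read here as the region outside the triangle whose vertices are the sensor, the fourth point and the fifth point; this reading, like the function names, is an interpretation for illustration, not claim language.

import math
from itertools import combinations

def _sign(p, a, b):
    # Signed-area test: which side of the segment a-b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def _inside_triangle(p, v1, v2, v3):
    d1, d2, d3 = _sign(p, v1, v2), _sign(p, v2, v3), _sign(p, v3, v1)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def vertical_blind_spot_contains(sensor, boundary_pts, fov_deg, roi_pts):
    # sensor:       (x, z) of the sensor; +x = travelling direction, +z = up.
    # boundary_pts: points on the boundary lines of the occluding object's
    #               vertical cross-section (from the 3D occlusion data).
    # fov_deg:      vertical-direction viewing angle of the sensor.
    # roi_pts:      sample points of the region of interest, as (x, z).
    sx, sz = sensor
    half_fov = math.radians(fov_deg) / 2.0

    def ray_angle(p):
        return math.atan2(p[1] - sz, p[0] - sx)

    # (ii) Occlusion-related points within the vertical viewing angle.
    visible = [p for p in boundary_pts if abs(ray_angle(p)) <= half_fov]

    # Fourth/fifth lines: the pair of rays with the largest mutual angle.
    p4, p5 = max(combinations(visible, 2),
                 key=lambda pq: abs(ray_angle(pq[0]) - ray_angle(pq[1])))
    if p4[1] < p5[1]:  # (iii) the fifth point is the one closer to the ground
        p4, p5 = p5, p4

    # (iv) "Not surrounded by" the fourth, fifth and sixth lines is read
    # here as: outside the triangle (sensor, fourth point, fifth point).
    return all(not _inside_triangle(p, sensor, p4, p5) for p in roi_pts)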