US 11,853,061 B2
Autonomous vehicle controlled based upon a lidar data segmentation system
Andrea Allais, San Francisco, CA (US); Micah Christopher Chambers, Oakland, CA (US); William Gongshu Xie, San Francisco, CA (US); Adam Samuel Cadien, San Francisco, CA (US); and Elliot Branson, San Francisco, CA (US)
Assigned to GM GLOBAL TECHNOLOGY OPERATIONS LLC, Detroit, MI (US)
Filed by GM GLOBAL TECHNOLOGY OPERATIONS LLC, Detroit, MI (US)
Filed on Sep. 30, 2021, as Appl. No. 17/491,453.
Application 17/491,453 is a continuation of application No. 16/054,065, filed on Aug. 3, 2018, granted, now Pat. No. 11,204,605.
Prior Publication US 2022/0019221 A1, Jan. 20, 2022
Int. Cl. G05D 1/00 (2006.01); G05D 1/02 (2020.01); B60W 10/06 (2006.01); B60W 10/20 (2006.01); G06N 3/08 (2023.01); B60W 10/18 (2012.01); G01S 17/931 (2020.01); G01S 7/48 (2006.01)
CPC G05D 1/0088 (2013.01) [B60W 10/06 (2013.01); B60W 10/18 (2013.01); B60W 10/20 (2013.01); G01S 17/931 (2020.01); G05D 1/0212 (2013.01); G05D 1/0238 (2013.01); G06N 3/08 (2013.01); G01S 7/4802 (2013.01); G05D 2201/0212 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An autonomous vehicle (AV) comprising:
a lidar sensor; and
a computing system that is in communication with the lidar sensor, wherein the computing system comprises:
a processor; and
memory that stores instructions that, when executed by the processor, cause the processor to perform acts comprising:
receiving lidar data, the lidar data based upon output of the lidar sensor, the lidar data comprising a plurality of points representative of positions of objects in a driving environment of the AV;
providing a first input feature that pertains to a first point in the lidar data to a deep neural network (DNN), wherein the first input feature is a distance from the first point to a next-closest point in the plurality of points, wherein the DNN generates first output that pertains to the first point responsive to receiving the first input feature;
providing a second input feature that pertains to a second point in the lidar data to the DNN, wherein the DNN generates second output that pertains to the second point responsive to receiving the second input feature; and
assigning respective labels to the first point and the second point based upon the first output and the second output of the DNN, wherein the labels indicate that the first point and the second point are representative of a same object in the driving environment.
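The claimed input feature — each point's distance to its next-closest point in the cloud — can be illustrated with a short sketch. This is a hypothetical, brute-force illustration only, not the patented implementation: the function name `nearest_neighbor_distances`, the toy point cloud, and the use of NumPy are all assumptions for clarity, and the patent's DNN (which consumes these features to produce per-point outputs) is not reproduced here.

```python
import numpy as np

def nearest_neighbor_distances(points):
    # For each point, compute the distance to its next-closest point
    # (the claimed per-point input feature). Brute force: O(n^2) pairwise
    # distances, with the self-distance masked out before taking the min.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)  # exclude distance from a point to itself
    return dists.min(axis=1)

# Toy lidar cloud: two tight pairs of points standing in for two objects
# in the driving environment.
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [5.0, 5.0, 0.0],
                [5.1, 5.0, 0.0]])

feats = nearest_neighbor_distances(pts)  # each point's next-closest distance
```

In the claim, a feature of this kind is provided per point to the DNN, whose per-point outputs are then used to assign labels indicating which points belong to the same object; a production system would use a k-d tree or similar spatial index rather than the quadratic computation above.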