US 12,072,442 B2
Object detection and detection confidence suitable for autonomous driving
Tommi Koivisto, Uusimaa (FI); Pekka Janis, Uusimaa (FI); Tero Kuosmanen, Uusimaa (FI); Timo Roman, Uusimaa (FI); Sriya Sarathy, Santa Clara, CA (US); William Zhang, Los Altos, CA (US); Nizar Assaf, Santa Clara, CA (US); and Colin Tracey, Santa Clara, CA (US)
Assigned to NVIDIA Corporation, Santa Clara, CA (US)
Filed by NVIDIA Corporation, San Jose, CA (US)
Filed on Nov. 22, 2021, as Appl. No. 17/456,045.
Application 17/456,045 is a continuation of application No. 16/277,895, filed on Feb. 15, 2019, granted, now Pat. No. 11,210,537, issued on Dec. 28, 2021.
Claims priority of provisional application 62/631,781, filed on Feb. 18, 2018.
Prior Publication US 2022/0101635 A1, Mar. 31, 2022
Int. Cl. G06V 10/46 (2022.01); B60W 50/00 (2006.01); G01S 7/41 (2006.01); G05D 1/00 (2006.01); G06F 16/35 (2019.01); G06F 18/21 (2023.01); G06F 18/214 (2023.01); G06F 18/23 (2023.01); G06F 18/2413 (2023.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/084 (2023.01); G06N 20/00 (2019.01); G06V 10/20 (2022.01); G06V 10/44 (2022.01); G06V 10/762 (2022.01); G06V 10/764 (2022.01); G06V 10/77 (2022.01); G06V 10/774 (2022.01); G06V 20/58 (2022.01); G01S 7/48 (2006.01); G01S 13/86 (2006.01); G01S 13/931 (2020.01); G01S 17/931 (2020.01); G06N 3/047 (2023.01); G06N 3/048 (2023.01)
CPC G01S 7/417 (2013.01) [B60W 50/00 (2013.01); G05D 1/0246 (2013.01); G06F 16/35 (2019.01); G06F 18/214 (2023.01); G06F 18/217 (2023.01); G06F 18/23 (2023.01); G06F 18/2414 (2023.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/084 (2013.01); G06N 20/00 (2019.01); G06V 10/255 (2022.01); G06V 10/454 (2022.01); G06V 10/46 (2022.01); G06V 10/762 (2022.01); G06V 10/764 (2022.01); G06V 10/7715 (2022.01); G06V 10/774 (2022.01); G06V 20/58 (2022.01); G06V 20/584 (2022.01); G01S 7/412 (2013.01); G01S 7/4802 (2013.01); G01S 13/867 (2013.01); G01S 2013/9318 (2020.01); G01S 2013/9323 (2020.01); G01S 17/931 (2020.01); G06N 3/047 (2023.01); G06N 3/048 (2023.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
determining a region corresponding to a first object depicted in a training image for one or more machine learning models (MLMs);
determining the first object is depicted as being closer than a second object in the training image;
assigning coverage values to spatial element regions corresponding to the training image based at least on the spatial element regions at least partially falling within the region, wherein at least one coverage value of the coverage values is assigned to a spatial element region of the spatial element regions based at least on the first object being depicted as closer than the second object in the training image; and
training the one or more MLMs to infer the coverage values in association with detecting the first object in the spatial element regions.
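
Exemplary claim 1 recites assigning per-cell coverage values in which an object depicted as closer takes precedence over a farther one. The following is a minimal sketch of one way such training targets might be rasterized, assuming axis-aligned 2D bounding boxes, a fixed grid of square cells as the "spatial element regions," a constant coverage value of 1.0 for covered cells, and the bottom edge of a box as a proxy for closeness. The function name assign_coverage_targets and all of these specifics are illustrative assumptions, not details taken from the patent.

    import numpy as np

    def assign_coverage_targets(image_h, image_w, cell, boxes):
        """Rasterize per-cell coverage targets for a training image.

        boxes: list of (x1, y1, x2, y2) pixel-coordinate boxes, one per
               labeled object. Returns a (rows, cols) array of coverage
               values and an array recording which box "owns" each cell.
        """
        rows, cols = image_h // cell, image_w // cell
        coverage = np.zeros((rows, cols), dtype=np.float32)
        owner = np.full((rows, cols), -1, dtype=np.int32)

        # Assumption: a box whose bottom edge is lower in the image depicts
        # a closer object. Sort far-to-near so nearer objects are drawn last.
        order = sorted(range(len(boxes)), key=lambda i: boxes[i][3])

        for idx in order:
            x1, y1, x2, y2 = boxes[idx]
            # Cells that at least partially fall within the object's region.
            c0, r0 = max(int(x1 // cell), 0), max(int(y1 // cell), 0)
            c1 = min(int(np.ceil(x2 / cell)), cols)
            r1 = min(int(np.ceil(y2 / cell)), rows)
            coverage[r0:r1, c0:c1] = 1.0   # covered cell gets a coverage value
            owner[r0:r1, c0:c1] = idx      # closer object overrides a farther one

        return coverage, owner

Rasterizing far-to-near is one simple way to realize the "closer object wins" assignment recited in the claim for cells where two objects overlap; the returned per-cell targets could then serve as the coverage values the one or more MLMs are trained to infer.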