US 12,260,489 B2
System and method for generating large simulation data sets for testing an autonomous driver
Dan Atsmon, Rehovot (IL); Eran Asa, Petach-Tikva (IL); and Ehud Spiegel, Petach-Tikva (IL)
Assigned to Cognata Ltd., Rehovot (IL)
Filed by Cognata Ltd., Rehovot (IL)
Filed on May 29, 2023, as Appl. No. 18/202,970.
Application 18/202,970 is a continuation of application No. 17/383,465, filed on Jul. 23, 2021, granted, now 11,694,388.
Application 17/383,465 is a continuation of application No. 16/594,200, filed on Oct. 7, 2019, granted, now 11,100,371, issued on Aug. 24, 2021.
Application 16/594,200 is a continuation in part of application No. 16/237,806, filed on Jan. 2, 2019, granted, now 10,460,208, issued on Oct. 29, 2019.
Prior Publication US 2023/0306680 A1, Sep. 28, 2023
Int. Cl. G06T 15/10 (2011.01); G05D 1/00 (2024.01); G06F 18/21 (2023.01); G06F 18/214 (2023.01); G06F 18/24 (2023.01); G06F 18/28 (2023.01); G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06T 3/047 (2024.01); G06T 7/55 (2017.01); G06T 7/579 (2017.01); G06V 20/56 (2022.01)
CPC G06T 15/10 (2013.01) [G05D 1/0088 (2013.01); G05D 1/0246 (2013.01); G06F 18/2148 (2023.01); G06F 18/217 (2023.01); G06F 18/24 (2023.01); G06F 18/28 (2023.01); G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06T 3/047 (2024.01); G06T 7/55 (2017.01); G06T 7/579 (2017.01); G06V 20/56 (2022.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30252 (2013.01); G06V 2201/07 (2022.01)] 42 Claims
OG exemplary drawing
 
1. A system for validating an autonomous system, comprising a target sensor, comprising:
at least one hardware processor adapted to execute a code for:
using a machine learning model to compute a plurality of computed depth maps based on a plurality of real signals, wherein the plurality of real signals are captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, and each of the plurality of computed depth maps qualifies one of the plurality of real signals;
applying a point of view transformation to the plurality of real signals and the plurality of computed depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by the target sensor in an identified position relative to the plurality of sensors; and
validating the autonomous system, comprising the target sensor, using the synthetic data.
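The point of view transformation recited in claim 1 can be illustrated, for camera images under a pinhole model, by a short Python sketch such as the one below. The function name, the per-pixel forward warping, and the zero-filled holes are illustrative assumptions of this sketch rather than details taken from the patent; a practical pipeline would additionally handle occlusion ordering, hole filling, fusion of several source sensors, and non-camera modalities.

import numpy as np

def reproject_to_target_view(image, depth, K_src, K_tgt, T_src_to_tgt):
    """Warp one real image into the viewpoint of a target sensor.

    image:        (H, W, 3) array captured by one of the real sensors.
    depth:        (H, W) computed depth map qualifying that image (metres).
    K_src, K_tgt: 3x3 pinhole intrinsics of the source and target sensors.
    T_src_to_tgt: 4x4 rigid transform from the source sensor frame to the
                  identified position of the target sensor.
    Returns an (H, W, 3) synthetic image; unobserved pixels stay zero.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, 3 x N with N = H * W.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project each pixel to a 3-D point in the source camera frame.
    rays = np.linalg.inv(K_src) @ pix
    pts_src = rays * depth.reshape(1, -1)

    # Move the points into the frame of the target sensor.
    pts_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    pts_tgt = (T_src_to_tgt @ pts_h)[:3]

    # Project into the target sensor's image plane.
    proj = K_tgt @ pts_tgt
    z = proj[2]
    valid = z > 1e-6
    u_t = np.round(proj[0, valid] / z[valid]).astype(int)
    v_t = np.round(proj[1, valid] / z[valid]).astype(int)
    src_idx = np.flatnonzero(valid)

    inside = (u_t >= 0) & (u_t < W) & (v_t >= 0) & (v_t < H)
    synthetic = np.zeros_like(image)
    flat = image.reshape(-1, 3)
    # Simple splatting: later points overwrite earlier ones (no z-buffer).
    synthetic[v_t[inside], u_t[inside]] = flat[src_idx[inside]]
    return synthetic

Under these assumptions, warping a frame from one mounted camera to a target camera at a different mounting position requires only the frame's depth map, both intrinsic matrices, and the 4x4 rigid transform between the two sensor poses.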
 
36. A method for validating an autonomous system, comprising:
using a machine learning model to compute a plurality of computed depth maps based on a plurality of real signals, wherein the plurality of real signals are captured simultaneously from a common physical scene, each of the plurality of real signals is captured by one of a plurality of sensors, and each of the plurality of computed depth maps qualifies one of the plurality of real signals;
applying a point of view transformation to the plurality of real signals and the plurality of computed depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors; and
validating the autonomous system, comprising the target sensor, using the synthetic data.
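Claims 1, 36, and 37 recite "a machine learning model" that computes a depth map qualifying each real signal, without fixing a particular architecture. The sketch below, assuming PyTorch, uses a deliberately tiny encoder-decoder network as a stand-in; TinyDepthNet and compute_depth_maps are hypothetical names, and any trained monocular or multi-view depth estimator could take their place.

import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy model mapping an RGB frame to a per-pixel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Softplus(),  # keep predicted depths positive
        )

    def forward(self, rgb):
        # (B, 3, H, W) -> (B, H, W) depth map at the input resolution.
        return self.decoder(self.encoder(rgb)).squeeze(1)

def compute_depth_maps(model, real_signals):
    """One computed depth map per real signal, as recited in the claims."""
    model.eval()
    with torch.no_grad():
        return [model(s.unsqueeze(0)).squeeze(0) for s in real_signals]

# Example: three simultaneously captured camera frames of a common scene.
frames = [torch.rand(3, 128, 256) for _ in range(3)]
depth_maps = compute_depth_maps(TinyDepthNet(), frames)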
 
37. A system for validating an autonomous system, comprising a target sensor, comprising:
at least one hardware processor adapted to execute a code for:
producing synthetic data simulating a possible signal captured from a common physical scene by the target sensor in an identified position relative to a plurality of sensors, wherein the synthetic data is produced using a plurality of real signals, the plurality of real signals being captured simultaneously from the common physical scene by the plurality of sensors; and
validating the autonomous system, comprising the target sensor, using the synthetic data;
wherein producing the synthetic data using the plurality of real signals comprises:
using a machine learning model to compute a plurality of computed depth maps based on the plurality of real signals, wherein the plurality of real signals are captured simultaneously from the common physical scene, each of the plurality of real signals is captured by one of the plurality of sensors, and each of the plurality of computed depth maps qualifies one of the plurality of real signals; and
applying a point of view transformation to the plurality of real signals and the plurality of computed depth maps, to produce the synthetic data simulating the possible signal captured from the common physical scene by the target sensor in the identified position relative to the plurality of sensors.
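Each independent claim concludes by validating the autonomous system, which comprises the target sensor, using the synthetic data, without fixing a validation criterion. The following sketch shows one minimal harness under assumed names (validate_autonomous_system, ValidationResult) and an assumed tolerance-based comparison against reference outputs; how reference outputs are obtained and which metric is applied are choices of this sketch, not of the patent.

from dataclasses import dataclass
from typing import Callable, Sequence

import numpy as np

@dataclass
class ValidationResult:
    total: int
    failures: int

    @property
    def passed(self) -> bool:
        return self.failures == 0

def validate_autonomous_system(
    system_under_test: Callable[[np.ndarray], np.ndarray],
    synthetic_frames: Sequence[np.ndarray],
    reference_outputs: Sequence[np.ndarray],
    tolerance: float = 0.05,
) -> ValidationResult:
    """Drive the system under test with synthetic target-sensor data and
    compare its outputs against reference outputs, e.g. outputs recorded
    for the corresponding real capture or labels from the test scenario."""
    failures = 0
    for frame, expected in zip(synthetic_frames, reference_outputs):
        actual = system_under_test(frame)
        if np.max(np.abs(actual - expected)) > tolerance:
            failures += 1
    return ValidationResult(total=len(synthetic_frames), failures=failures)

In this hypothetical harness, a run passes only when every synthetic frame produces an output within the chosen tolerance of its reference; any stricter or scenario-specific criterion could be substituted.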