CPC G06T 15/10 (2013.01) [G05D 1/0088 (2013.01); G05D 1/0246 (2013.01); G06F 18/217 (2023.01); G06F 18/2148 (2023.01); G06F 18/24 (2023.01); G06F 18/28 (2023.01); G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06T 3/0018 (2013.01); G06T 7/55 (2017.01); G06T 7/579 (2017.01); G06V 20/56 (2022.01); G05D 2201/0213 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30252 (2013.01); G06V 2201/07 (2022.01)] | 42 Claims |
1. A system for creating synthetic data, comprising at least one hardware processor adapted to execute code for:
using a machine learning model to compute a plurality of computed depth maps based on a plurality of real signals, the plurality of real signals being captured simultaneously from a common physical scene, each of the plurality of real signals being captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifying one of the plurality of real signals; and
applying a point of view transformation to the plurality of real signals and the plurality of computed depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors.
|
36. A method for creating synthetic data, comprising:
using a machine learning model to compute a plurality of computed depth maps based on a plurality of real signals, the plurality of real signals being captured simultaneously from a common physical scene, each of the plurality of real signals being captured by one of a plurality of sensors, each of the plurality of computed depth maps qualifying one of the plurality of real signals; and
applying a point of view transformation to the plurality of real signals and the plurality of computed depth maps, to produce synthetic data simulating a possible signal captured from the common physical scene by a target sensor in an identified position relative to the plurality of sensors.
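The point of view transformation recited in claims 1 and 36 can be sketched as a depth-based reprojection: each pixel of a real signal is back-projected to a 3-D point using its computed depth map, moved into the target sensor's frame, and projected again. This is a minimal illustrative sketch only, assuming a pinhole camera model and a single grayscale signal; the function and parameter names (`reproject_to_target_view`, `K`, `R`, `t`) are not from the patent, and occlusion handling (z-buffering) and hole filling are omitted.

```python
import numpy as np

def reproject_to_target_view(image, depth, K, R, t):
    """Forward-warp a real grayscale signal into the viewpoint of a
    target sensor at relative pose (R, t), using the computed depth
    map that qualifies the signal. Pinhole intrinsics K are assumed
    shared by source and target sensors for simplicity."""
    h, w = depth.shape
    # Pixel grid in homogeneous coordinates, flattened row-major.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    # Back-project pixels to 3-D points in the source sensor frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform points into the target sensor frame and project.
    pts_t = R @ pts + t.reshape(3, 1)
    proj = K @ pts_t
    z = proj[2]
    valid = z > 1e-6  # keep only points in front of the target sensor
    u2 = np.round(proj[0] / np.where(valid, z, 1.0)).astype(int)
    v2 = np.round(proj[1] / np.where(valid, z, 1.0)).astype(int)
    # Scatter source intensities into the synthetic target image.
    out = np.zeros_like(image)
    ok = valid & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    out[v2[ok], u2[ok]] = image.reshape(-1)[ok]
    return out

# With R = I and t = 0 (target sensor coincident with the source),
# the warp reproduces the input signal.
```

A production implementation would warp all of the plurality of real signals, resolve occlusions with a z-buffer, and blend contributions from multiple sensors into the final synthetic signal.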
|
37. A system for training an autonomous system comprising a target sensor, the system comprising:
at least one hardware processor adapted to execute code for:
producing synthetic data simulating a possible signal captured from a common physical scene by the target sensor in an identified position relative to a plurality of sensors, wherein the synthetic data is produced using a plurality of real signals captured simultaneously from the common physical scene by the plurality of sensors; and
training the autonomous system using the synthetic data;
wherein producing the synthetic data using the plurality of real signals comprises:
using a machine learning model to compute a plurality of computed depth maps based on the plurality of real signals, the plurality of real signals being captured simultaneously from the common physical scene, each of the plurality of real signals being captured by one of the plurality of sensors, each of the plurality of computed depth maps qualifying one of the plurality of real signals;
applying a point of view transformation to the plurality of real signals and the plurality of computed depth maps, to produce the synthetic data simulating the possible signal captured from the common physical scene by the target sensor in the identified position relative to the plurality of sensors; and
providing the synthetic data to at least one testing engine to train the autonomous system.
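Claim 37's final step, providing the synthetic data to a testing engine to train the autonomous system, can be illustrated with a deliberately simple stand-in: a linear model fitted by gradient descent to synthetic (signal, target) pairs. This is a hypothetical sketch, not the patent's method; the name `train_on_synthetic` and the linear-model choice are illustrative assumptions, and a real autonomous system would train a perception network instead.

```python
import numpy as np

def train_on_synthetic(signals, targets, lr=0.1, epochs=2000):
    """Toy stand-in for the claimed testing engine: fits a linear
    model to synthetic (signal, target) pairs by batch gradient
    descent on the mean-squared error."""
    w = np.zeros(signals.shape[1])
    for _ in range(epochs):
        residual = signals @ w - targets
        # Gradient of 0.5 * mean(residual**2) with respect to w.
        w -= lr * signals.T @ residual / len(targets)
    return w
```

The point of the claimed arrangement is that `signals` here can come entirely from the point-of-view transformation, so the system can be trained for a target sensor position at which no real sensor was mounted.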
|