US 12,093,834 B2
Methods and systems for training and validating a perception system
Youval Nehmadi, Nili (IL); Shahar Ben Ezra, Or Yehuda (IL); Shmuel Mangan, Nes Ziona (IL); Mark Wagner, Rehovot (IL); Anna Cohen, Holon (IL); and Itzik Avital, Hod HaSharon (IL)
Assigned to Vaya Vision Sensing Ltd., Or Yehuda (IL)
Appl. No. 17/762,345
Filed by VayaVision Sensing Ltd., Or Yehuda (IL)
PCT Filed Sep. 22, 2020, PCT No. PCT/IL2020/051028
§ 371(c)(1), (2) Date Mar. 21, 2022.
PCT Pub. No. WO2021/053680, PCT Pub. Date Mar. 25, 2021.
Claims priority of provisional application 62/903,846, filed on Sep. 22, 2019.
Prior Publication US 2022/0335729 A1, Oct. 20, 2022
Int. Cl. G06N 3/088 (2023.01); G06V 10/25 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01); G06V 20/58 (2022.01)
CPC G06N 3/088 (2013.01) [G06V 10/25 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01); G06V 20/58 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
receiving first training signals from a set of reference sensors and receiving second training signals from a set of test sensors, the set of reference sensors and the set of test sensors simultaneously exposed to a common scene;
processing the first training signals to obtain reference images containing reference depth information associated with said scene;
using the second training signals and the reference images to train a neural network for transforming subsequent test signals from the set of test sensors into test images containing inferred depth information, wherein the set of reference sensors comprises at least a first lidar sensor and the set of test sensors comprises at least a second lidar sensor, the first lidar sensor having a higher resolution, a greater range, or a wider field of view than the second lidar sensor.
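The claim's training scheme can be illustrated in miniature: depth images derived from a higher-resolution reference lidar serve as supervision targets, and a model is trained to infer comparable depth from the lower-resolution test-lidar signal alone. The following sketch is purely illustrative and makes several simplifying assumptions not found in the patent: the "scene" is synthetic, the test signal is a subsampled, noisy copy of the reference depth, and a one-layer linear model stands in for the neural network.

```python
import numpy as np

# Illustrative sketch of the supervision scheme in claim 1 (NOT the
# patented implementation): a high-resolution reference lidar yields
# dense depth targets; a model learns to map the coarser test-lidar
# signal to an inferred-depth image.

rng = np.random.default_rng(0)
N, D_TEST, D_REF = 256, 16, 64  # scenes, test-signal dim, reference-depth dim

# Simulated common scene: reference depth images (targets) and a
# subsampled, noisy test-lidar signal observing the same scene.
ref_depth = rng.uniform(1.0, 50.0, size=(N, D_REF))
test_sig = ref_depth[:, :: D_REF // D_TEST] + rng.normal(0.0, 0.5, (N, D_TEST))

# One-layer linear model as a stand-in for the neural network.
W = rng.normal(0.0, 0.01, size=(D_TEST, D_REF))
lr = 5e-6  # small step size chosen for stability of plain gradient descent

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

loss_before = mse(test_sig @ W, ref_depth)
for _ in range(400):
    pred = test_sig @ W                             # inferred depth image
    grad = (2.0 / N) * test_sig.T @ (pred - ref_depth)
    W -= lr * grad                                  # gradient step on MSE
loss_after = mse(test_sig @ W, ref_depth)

print(loss_before > loss_after)  # supervision by reference depth reduces error
```

Once trained, the model is applied to subsequent test-lidar signals alone (`test_sig @ W` here), producing inferred-depth images without the reference sensor, which is the deployment scenario the claim's "subsequent test signals" language contemplates.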