US 12,230,006 B2
Removal of artifacts from images captured by sensors
Tsvi Lev, Tel-Aviv (IL)
Assigned to NEC Corporation Of America, Herzlia (IL)
Filed by NEC Corporation Of America, Herzlia (IL)
Filed on Jan. 12, 2022, as Appl. No. 17/573,780.
Prior Publication US 2023/0222761 A1, Jul. 13, 2023
Int. Cl. G06V 10/60 (2022.01); G06T 5/50 (2006.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06V 10/50 (2022.01); G06V 10/62 (2022.01)
CPC G06V 10/62 (2022.01) [G06T 5/50 (2013.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06V 10/50 (2022.01); G06V 10/60 (2022.01); G06T 2207/30252 (2013.01); G06V 2201/07 (2022.01)] 17 Claims
OG exemplary drawing
 
1. A vehicle sensor system, comprising:
a plurality of sensors with mostly overlapping fields of view that simultaneously acquire temporary images;
a processing circuitry that:
analyzes the temporary images to identify at least one blocked image area in at least one temporary image of the plurality of temporary images which is less or not blocked in at least one spatially corresponding image area of at least one other temporary image of the plurality of temporary images, wherein identifying said at least one blocked image area is conducted by:
accessing a dataset that maps between pixels of a first sensor and corresponding field-of-view (FOV) corrected pixels in a second sensor, that depict a same region of the overlapping fields of view,
computing, for the first sensor, a first difference between a first image patch around a first pixel(s) of the temporary image acquired by the first sensor, and a corresponding first image patch around the first pixel(s) of at least one preceding temporary image acquired by the first sensor,
computing, for the second sensor, a second difference between a second image patch around the FOV corrected pixel(s) of the temporary image acquired by the second sensor corresponding to the first pixel(s) of the temporary image acquired by the first sensor, and a corresponding second image patch around the FOV corrected pixel(s) of at least one preceding temporary image acquired by the second sensor, and
when the first difference and the second difference are substantially different, identifying at least one blocked image area as the first image patch or the second image patch;
selects visual data from the at least one spatially corresponding image area over corresponding visual data from the at least one blocked image area,
merges the plurality of temporary images into a final image using the selected visual data and excludes the at least one blocked image area; and
an output interface that forwards the final image to a vehicle controller.
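The detection step recited in claim 1 amounts to a cross-sensor temporal-consistency check: a patch that stops changing on one sensor while the spatially corresponding patch on the other sensor keeps changing is likely obscured (e.g., by dirt or a raindrop on that lens). The Python sketch below is only illustrative; the function names (patch, temporal_diff, find_blocked_patches), the fixed patch half-width, and the ratio threshold standing in for "substantially different" are assumptions, not the claimed implementation.

import numpy as np

def patch(img, y, x, half=8):
    """Extract a square patch around (y, x), clipped to the image bounds."""
    h, w = img.shape[:2]
    return img[max(y - half, 0):min(y + half + 1, h),
               max(x - half, 0):min(x + half + 1, w)]

def temporal_diff(curr, prev, y, x, half=8):
    """Mean absolute difference between the same patch in two consecutive frames."""
    a = patch(curr, y, x, half).astype(np.float32)
    b = patch(prev, y, x, half).astype(np.float32)
    return float(np.mean(np.abs(a - b)))

def find_blocked_patches(curr1, prev1, curr2, prev2, pixel_map,
                         ratio_thresh=4.0, half=8):
    """Flag patches whose temporal change differs sharply between the two sensors.

    pixel_map is the FOV-correction dataset: {(y1, x1) on sensor 1: (y2, x2) on
    sensor 2}, both depicting the same region of the overlapping fields of view.
    Returns a list of ((y1, x1), blocked_sensor) tuples.
    """
    blocked = []
    for (y1, x1), (y2, x2) in pixel_map.items():
        d1 = temporal_diff(curr1, prev1, y1, x1, half)   # change seen by sensor 1
        d2 = temporal_diff(curr2, prev2, y2, x2, half)   # change seen by sensor 2
        lo, hi = sorted((d1, d2))
        if hi > ratio_thresh * max(lo, 1e-3):            # "substantially different"
            # The patch that barely changes while its counterpart changes a lot
            # is the likely blocked one.
            blocked.append(((y1, x1), 1 if d1 < d2 else 2))
    return blocked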
 
16. A method for generating an image from a vehicle sensor system, comprising:
obtaining temporary images from a plurality of sensors with mostly overlapping fields of view that simultaneously acquire the temporary images,
analyzing the temporary images to identify at least one blocked image area in at least one temporary image of the plurality of temporary images which is less or not blocked in at least one spatially corresponding image area of at least one other temporary image of the plurality of temporary images, wherein identifying said at least one blocked image area is conducted by:
accessing a dataset that maps between pixels of a first sensor and corresponding field-of-view (FOV) corrected pixels in a second sensor, that depict a same region of the overlapping fields of view,
computing, for the first sensor, a first difference between a first image patch around a first pixel(s) of the temporary image acquired by the first sensor, and a corresponding first image patch around the first pixel(s) of at least one preceding temporary image acquired by the first sensor,
computing, for the second sensor, a second difference between a second image patch around the FOV corrected pixel(s) of the temporary image acquired by the second sensor corresponding to the first pixel(s) of the temporary image acquired by the first sensor, and a corresponding second image patch around the FOV corrected pixel(s) of at least one preceding temporary image acquired by the second sensor, and
when the first difference and the second difference are substantially different, identifying at least one blocked image area as the first image patch or the second image patch;
selecting visual data from the at least one spatially corresponding image area over corresponding visual data from the at least one blocked image area,
merging the plurality of temporary images into a final image using the selected visual data and excluding the at least one blocked image area; and
forwarding the final image to a vehicle controller.
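The selecting and merging steps of claim 16 can be sketched in the same spirit: visual data from the spatially corresponding, unblocked area of the other sensor replaces the blocked patch when the final image is assembled. The sketch below reuses the output of find_blocked_patches() above; the fixed patch size, the border-handling assumption, and the choice to show only the "sensor 1 blocked, sensor 2 clear" direction are simplifications for illustration, not taken from the patent.

def merge_images(img1, img2, pixel_map, blocked, half=8):
    """Replace blocked patches in sensor 1's frame with sensor 2's visual data.

    `blocked` is the output of find_blocked_patches() above.  For brevity the
    sketch assumes mapped pixels lie far enough from the borders that every
    patch is a full (2*half + 1) square, and only the direction "sensor 1
    blocked, sensor 2 clear" is shown; the converse is symmetric.
    """
    final = img1.copy()
    for (y1, x1), blocked_sensor in blocked:
        if blocked_sensor != 1:
            continue                        # sensor 1 is clear here; keep its data
        y2, x2 = pixel_map[(y1, x1)]        # FOV-corrected location on sensor 2
        final[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1] = \
            img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
    return final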
 
17. A sensor system, comprising:
a plurality of sensors with mostly overlapping fields of view that simultaneously acquire temporary images;
a processing circuitry that:
analyzes the temporary images to identify at least one blocked image area in at least one temporary image of the plurality of temporary images which is less or not blocked in at least one spatially corresponding image area of at least one other temporary image of the plurality of temporary images, wherein identifying said at least one blocked image area is conducted by:
accessing a dataset that maps between pixels of a first sensor and corresponding field-of-view (FOV) corrected pixels in a second sensor, that depict a same region of the overlapping fields of view,
computing, for the first sensor, a first difference between a first image patch around a first pixel(s) of the temporary image acquired by the first sensor, and a corresponding first image patch around the first pixel(s) of at least one preceding temporary image acquired by the first sensor,
computing, for the second sensor, a second difference between a second image patch around the FOV corrected pixel(s) of the temporary image acquired by the second sensor corresponding to the first pixel(s) of the temporary image acquired by the first sensor, and a corresponding second image patch around the FOV corrected pixel(s) of at least one preceding temporary image acquired by the second sensor, and
when the first difference and the second difference are substantially different, identifying at least one blocked image area as the first image patch or the second image patch;
selects visual data from the at least one spatially corresponding image area over corresponding visual data from the at least one blocked image area,
merges the plurality of temporary images into a final image using the selected visual data and excludes the at least one blocked image area; and
an output interface that forwards the final image to a controller selected from a group comprising: surveillance, biometric, and security controllers.
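For completeness, a small synthetic usage example of the two sketches above; all data, the 64x64 frame size, and the identity FOV-correction map are fabricated purely for illustration.

import numpy as np

# Two sensors whose fields of view happen to align pixel-for-pixel here, with a
# "blocked" region simulated by freezing part of sensor 1's current frame at its
# previous values (a static obstruction shows no temporal change).
rng = np.random.default_rng(0)
prev1 = rng.integers(0, 255, (64, 64), dtype=np.uint8)
curr1 = rng.integers(0, 255, (64, 64), dtype=np.uint8)
prev2, curr2 = prev1.copy(), curr1.copy()
curr1[20:37, 20:37] = prev1[20:37, 20:37]   # sensor 1 sees no change there: blocked
pixel_map = {(28, 28): (28, 28)}            # one mapped pixel pair, for the demo

blocked = find_blocked_patches(curr1, prev1, curr2, prev2, pixel_map)
final = merge_images(curr1, curr2, pixel_map, blocked)
# `final` now carries sensor 2's unblocked data in the flagged patch and would be
# forwarded over the output interface to the vehicle, surveillance, biometric, or
# security controller.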