US 11,810,278 B2
Low light and thermal image normalization for advanced fusion
Raymond Kirk Price, Redmond, WA (US); Michael Bleyer, Seattle, WA (US); and Christopher Douglas Edmonds, Carnation, WA (US)
Assigned to Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed by Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed on May 3, 2021, as Appl. No. 17/306,681.
Prior Publication US 2022/0351345 A1, Nov. 3, 2022
Int. Cl. G06T 5/50 (2006.01); G06T 7/13 (2017.01); G06T 7/33 (2017.01); G06T 3/40 (2006.01); G06T 5/00 (2006.01); G06T 7/40 (2017.01); H04N 5/265 (2006.01); H04N 23/90 (2023.01)
CPC G06T 5/50 (2013.01) [G06T 3/40 (2013.01); G06T 5/002 (2013.01); G06T 7/13 (2017.01); G06T 7/33 (2017.01); G06T 7/40 (2013.01); H04N 5/265 (2013.01); H04N 23/90 (2023.01); G06T 2207/10024 (2013.01); G06T 2207/10048 (2013.01); G06T 2207/20221 (2013.01)] 10 Claims
OG exemplary drawing
 
1. A method for mitigating effects of noise when fusing multiple images together to generate an enhanced image, said method comprising:
generating a first image of an environment using a first camera of a first modality;
generating a second image of the environment using a second camera of a second modality;
identifying pixels that are common between the first image and the second image;
determining a first set of textures for the common pixels in the first image;
determining a second set of textures for the common pixels in the second image;
identifying a camera characteristic of the first camera, wherein the camera characteristic is associated with noise that may be present in the first image;
based on the identified camera characteristic, applying a scaling factor to the first set of textures to generate a scaled set of textures, wherein applying the scaling factor operates to mitigate an effect of noise that is potentially present in the first image;
using the scaled set of textures to determine a first saliency of the first image, wherein the first saliency reflects an amount of texture variation that is present in the scaled set of textures;
determining a second saliency of the second image, wherein the second saliency reflects an amount of texture variation that is present in the second set of textures;
performing edge detection on the first image and the second image, wherein the edge detection is performed using only the common pixels that were identified in the first image and the second image;
generating an alpha map that reflects edge detection weights that have been computed for each one of the common pixels, wherein the alpha map is based on the first saliency and the second saliency, and wherein the edge detection weights are generated based on the edge detection; and
based on the alpha map, merging textures from the common pixels included in the first image and the second image to generate a fused enhanced image.
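The claim does not fix any particular formulas for "textures," the "scaling factor," or "saliency." As a minimal illustrative sketch only, the texture-scaling and saliency steps might be modeled with local absolute gradients standing in for textures, a scaling factor derived from camera gain (one plausible camera characteristic associated with noise), and saliency as the variance of the scaled texture values; all three choices are assumptions, not the patented math.

```python
# Hypothetical sketch of the texture-scaling and saliency steps of the claim.
# Gradient-based textures, the 1/(1+gain) scaling form, and variance-based
# saliency are illustrative assumptions; the claim does not specify them.

def textures(image):
    """Per-pixel texture: sum of absolute horizontal and vertical gradients."""
    h, w = len(image), len(image[0])
    tex = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(image[y][x] - image[y][x - 1]) if x > 0 else 0.0
            gy = abs(image[y][x] - image[y - 1][x]) if y > 0 else 0.0
            tex[y][x] = gx + gy
    return tex

def scale_textures(tex, camera_gain):
    """Attenuate textures as gain rises, since high gain amplifies sensor
    noise that masquerades as texture. The 1/(1+gain) form is illustrative."""
    s = 1.0 / (1.0 + camera_gain)
    return [[s * t for t in row] for row in tex]

def saliency(tex):
    """Saliency as the variance of texture values, i.e. the amount of
    texture variation present in the (scaled) texture set."""
    vals = [t for row in tex for t in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

Under this sketch, a flat image has zero saliency, and raising the gain parameter shrinks every texture value, which in turn lowers that image's saliency relative to the other modality's image.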
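The alpha-map step can likewise be sketched, again with assumed details: a simple horizontal-gradient edge measure per common pixel, each image's edge response weighted by its global saliency, and a per-pixel normalized ratio serving as the edge-detection weight. The claim requires only that the alpha map reflect per-pixel edge-detection weights and be based on both saliencies; this particular normalization is hypothetical.

```python
# Hypothetical sketch of the edge-detection and alpha-map steps.
# The edge measure and the saliency-weighted normalization are assumptions.

def edge_strength(image, y, x):
    """Simple per-pixel edge measure: absolute horizontal gradient."""
    return abs(image[y][x] - image[y][x - 1]) if x > 0 else 0.0

def alpha_map(img1, img2, sal1, sal2, common):
    """Per-pixel alpha in [0, 1], computed only over the common pixels.
    Larger alpha favors the first image where its saliency-weighted edge
    response dominates; ties fall back to an even 0.5 split."""
    alpha = {}
    for (y, x) in common:
        e1 = sal1 * edge_strength(img1, y, x)
        e2 = sal2 * edge_strength(img2, y, x)
        alpha[(y, x)] = e1 / (e1 + e2) if (e1 + e2) > 0 else 0.5
    return alpha
```

Restricting the loop to `common` mirrors the claim's requirement that edge detection use only the pixels identified as common to both images.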
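Finally, the merge step can be read as a per-pixel alpha blend over the common pixels, with alpha drawn from the alpha map; a linear blend is one natural reading, though the claim does not mandate it.

```python
# Hypothetical sketch of the final fusion step: linear alpha blending of
# the two images' values at each common pixel. Pixels absent from the
# alpha map (i.e. not common to both images) are left at zero here.

def fuse(img1, img2, alpha):
    """Merge textures from common pixels: fused = a*img1 + (1-a)*img2."""
    h, w = len(img1), len(img1[0])
    fused = [[0.0] * w for _ in range(h)]
    for (y, x), a in alpha.items():
        fused[y][x] = a * img1[y][x] + (1.0 - a) * img2[y][x]
    return fused
```

For example, with `alpha = {(0, 0): 0.25}`, a pixel valued 10.0 in the first image and 20.0 in the second fuses to 0.25·10 + 0.75·20 = 17.5, so the low-alpha image contributes less to the enhanced output.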