US 12,260,563 B1
Warping simulator method of detecting and accounting for true motion of imagery subject to atmospheric turbulence
Russell Hardie, Centerville, OH (US); and Richard Van Hook, Xenia, OH (US)
Assigned to United States of America as represented by the Secretary of the Air Force, Wright-Patterson AFB, OH (US)
Filed by Russell Hardie, Centerville, OH (US); and Richard Van Hook, Xenia, OH (US)
Filed on Jul. 9, 2022, as Appl. No. 17/861,173.
Claims priority of provisional application 63/284,233, filed on Nov. 30, 2021.
Claims priority of provisional application 63/230,200, filed on Aug. 6, 2021.
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/33 (2017.01); G06T 3/18 (2024.01); G06T 5/20 (2006.01); G06T 7/246 (2017.01); H04N 23/68 (2023.01)
CPC G06T 7/248 (2017.01) [G06T 3/18 (2024.01); G06T 5/20 (2013.01); G06T 7/337 (2017.01); H04N 23/6811 (2023.01); H04N 23/683 (2023.01); G06T 2207/10016 (2013.01); G06T 2207/20032 (2013.01); G06T 2207/20076 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method of determining the likelihood that a pixel in a video image subject to optical turbulence represents true motion, the method comprising the steps of:
a. capturing incoming light from a scene with an image capture device to yield an original video having a sequence of frames;
b. creating a prototype image from the sequence of frames, the prototype image being a geometrically accurate representation of the scene and substantially free of contamination caused by true scene motion;
c. estimating a local atmospheric turbulence profile;
d. applying geometric distortions to the prototype image, the geometric distortions being statistically consistent with the estimated atmospheric turbulence profile, thereby creating multiple realizations of the prototype image without substantially duplicating one or more blurring effects;
e. selecting a subject pixel location from each pixel location available in the prototype image;
f. extracting a finite window for each subject pixel location, whereby the subject pixel location is contained within the window;
g. forming a multivariate Gaussian model having a probability density function associated with a respective pixel location, whereby an intensity of each pixel in the window is treated as a separate variable;
h. repeating steps e through g for all pixel locations;
i. evaluating each pixel at the subject pixel location in the original image sequence against the probability density function from the multivariate Gaussian model to determine a likelihood that each pixel at the subject pixel location represents true motion within the original video; and
j. repeating step i until all pixel locations in all frames within the original video have been evaluated and probabilities therefor have been determined.
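The claimed pipeline (simulate turbulence-consistent warps of a clean prototype, fit a per-pixel multivariate Gaussian over a local window, then score observed pixels against that model) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the synthetic `prototype` image, the Gaussian-filtered random shift field standing in for the estimated turbulence profile, the window radius, the regularization `eps`, and the injected motion pixel are all assumptions chosen for the example.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

H = W = 32
yy, xx = np.mgrid[0:H, 0:W]
# Hypothetical prototype image: a smooth synthetic scene standing in for the
# geometrically accurate, motion-free reference built from the frame sequence.
prototype = np.sin(xx / 4.0) + np.cos(yy / 5.0)

def warp(img, sigma_px=0.8, corr=3.0):
    """Apply a random, spatially correlated geometric distortion (tilt-only
    shift field), mimicking turbulence warping without adding any blur."""
    dy = ndimage.gaussian_filter(rng.standard_normal(img.shape), corr)
    dx = ndimage.gaussian_filter(rng.standard_normal(img.shape), corr)
    dy *= sigma_px / (dy.std() + 1e-12)   # scale shifts to ~sigma_px pixels
    dx *= sigma_px / (dx.std() + 1e-12)
    coords = np.array([yy + dy, xx + dx])
    return ndimage.map_coordinates(img, coords, order=1, mode="reflect")

# Steps c-d: many warped realizations of the prototype image.
realizations = np.stack([warp(prototype) for _ in range(200)])

def window_samples(stack, r, c, rad=1):
    """Steps e-f: extract the (2*rad+1)^2 window around (r, c) from every
    realization, flattened so each pixel intensity is a separate variable."""
    return stack[:, r - rad:r + rad + 1, c - rad:c + rad + 1].reshape(len(stack), -1)

def log_likelihood(obs_window, samples, eps=1e-3):
    """Step g/i: fit a multivariate Gaussian to the window samples and
    evaluate the observed window's log-density under it."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + eps * np.eye(samples.shape[1])
    d = obs_window - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = d @ np.linalg.solve(cov, d)
    return -0.5 * (maha + logdet + len(mu) * np.log(2.0 * np.pi))

# Score one "observed" frame: a fresh warp plus an injected moving object.
observed = warp(prototype)
observed[16, 16] += 3.0   # hypothetical true scene motion at pixel (16, 16)

ll_motion = log_likelihood(window_samples(observed[None], 16, 16)[0],
                           window_samples(realizations, 16, 16))
ll_static = log_likelihood(window_samples(observed[None], 8, 8)[0],
                           window_samples(realizations, 8, 8))
print(ll_motion < ll_static)   # the motion pixel is far less likely
```

A pixel whose windowed intensities fall in a low-density region of its fitted Gaussian cannot be explained by turbulence warping alone and is therefore flagged as likely true motion; in practice the per-pixel log-likelihoods would be thresholded across every pixel location and frame (steps h and j).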