US 12,413,859 B2
Systems and methods for generating depth information from low-resolution images
Michael Bleyer, Seattle, WA (US); Christopher Douglas Edmonds, Carnation, WA (US); Antonios Matakos, Redmond, WA (US); and Raymond Kirk Price, Redmond, WA (US)
Assigned to Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed by Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed on Nov. 6, 2023, as Appl. No. 18/502,980.
Application 18/502,980 is a continuation of application No. 17/230,813, filed on Apr. 14, 2021, granted, now Pat. No. 11,849,220.
Prior Publication US 2024/0073523 A1, Feb. 29, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 23/68 (2023.01); G06T 3/4076 (2024.01); G06T 7/55 (2017.01)
CPC H04N 23/682 (2023.01) [G06T 3/4076 (2013.01); G06T 7/55 (2017.01); H04N 23/689 (2023.01)] 20 Claims
OG exemplary drawing
 
1. A system for generating depth information from low-resolution images of a captured environment, the system comprising:
one or more processors; and
one or more hardware storage devices storing instructions that are executable by the one or more processors to configure the system to generate depth information from low-resolution images of a captured environment by configuring the system to:
access a plurality of image frames capturing an environment and acquired by one or more image capture devices;
identify a first group of image frames from the plurality of image frames;
generate a first image comprising a first composite image of the environment using the first group of image frames as input and using super-resolution imaging techniques, the first composite image comprising an image resolution that is higher than an image resolution of the image frames of the first group of image frames;
obtain a second image of the environment, wherein parallax exists between a capture perspective associated with the first composite image and a capture perspective associated with the second image, wherein the second image comprises an image resolution that is higher than the image resolution of the image frames of the first group of image frames; and
generate depth information for the environment based on the first composite image and the second image.
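The pipeline recited in claim 1 can be illustrated with a minimal Python sketch. This is not the patented implementation; it assumes OpenCV and NumPy, grayscale uint8 frames of equal size, purely translational motion between burst frames, and rectified views for the parallax step. The function names build_composite and depth_from_parallax are hypothetical, and shift-and-add fusion and block-matching stereo stand in for the claimed "super-resolution imaging techniques" and depth generation.

import cv2
import numpy as np

def build_composite(frames, scale=2):
    # Hypothetical shift-and-add super-resolution: upsample each low-resolution
    # burst frame, register it to the first frame with sub-pixel phase
    # correlation, and average the aligned frames on the upsampled grid.
    # Frames are assumed to be single-channel (grayscale) uint8 arrays.
    ref = cv2.resize(frames[0], None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC).astype(np.float32)
    accum = ref.copy()
    for frame in frames[1:]:
        up = cv2.resize(frame, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC).astype(np.float32)
        # Estimate the translational offset of this frame relative to the reference.
        (dx, dy), _ = cv2.phaseCorrelate(ref, up)
        shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(up, shift, (up.shape[1], up.shape[0]))
        accum += aligned
    return (accum / len(frames)).astype(np.uint8)

def depth_from_parallax(first_view, second_view, num_disparities=64, block_size=9):
    # Disparity (inverse depth) from two rectified views that exhibit parallax,
    # computed with OpenCV's block-matching stereo correspondence.
    matcher = cv2.StereoBM_create(numDisparities=num_disparities, blockSize=block_size)
    return matcher.compute(first_view, second_view)

# Example use under the stated assumptions: fuse a burst of low-resolution
# frames into a higher-resolution composite, then estimate disparity against
# a second high-resolution view captured from a different perspective.
# disparity = depth_from_parallax(build_composite(burst_frames), second_image)

In this sketch, the composite image plays the role of the first composite image of the claim, and the second, independently captured high-resolution view supplies the parallax needed to recover depth.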