US 12,094,143 B2
Image-based environment reconstruction
Mikko Strandborg, Hangonkylä (FI); and Petteri Timonen, Helsinki (FI)
Assigned to Varjo Technologies Oy, Helsinki (FI)
Filed by Varjo Technologies Oy, Helsinki (FI)
Filed on Dec. 10, 2021, as Appl. No. 17/547,769.
Prior Publication US 2023/0186500 A1, Jun. 15, 2023
Int. Cl. G06T 7/557 (2017.01); G06T 7/73 (2017.01); G06T 19/00 (2011.01)
CPC G06T 7/557 (2017.01) [G06T 7/74 (2017.01); G06T 19/006 (2013.01); G06T 2207/10028 (2013.01)] 23 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
capturing visible-light images of a given real-world environment via at least one visible-light camera from a plurality of view points in the given real-world environment, wherein 3D positions of the plurality of view points are represented in a given coordinate system;
dividing a 3D space occupied by the given real-world environment into a 3D grid of convex-polyhedral regions, wherein the 3D grid is represented in the given coordinate system;
creating a 3D data structure comprising a plurality of nodes, each node representing a corresponding convex-polyhedral region of the 3D space occupied by the given real-world environment;
determining 3D positions of pixels of the visible-light images in the given coordinate system, based on the 3D positions of corresponding view points from which the visible-light images are captured;
dividing each visible-light image into a plurality of portions, wherein 3D positions of pixels of a given portion of said visible-light image fall inside a corresponding convex-polyhedral region of the 3D space; and
storing, in each node of the 3D data structure, corresponding portions of the visible-light images whose pixels' 3D positions fall inside a corresponding convex-polyhedral region of the 3D space,
wherein for a visible-light image captured from a given view point, each portion of the visible-light image is stored in a corresponding node along with orientation information pertaining to said portion,
the computer-implemented method further comprising, for a given view point from a perspective of which a given visible-light image is to be reconstructed using the 3D data structure,
determining a set of visible nodes whose corresponding convex-polyhedral regions are visible from the given view point;
for a given visible node of said set, selecting, from amongst portions of the visible-light images stored in the given visible node, a portion of a visible-light image whose orientation information indicates at least one direction which matches a direction of a given depth axis of the given visible-light image from the given view point or a view direction from the given view point to a convex-polyhedral region corresponding to the given visible node; and
reconstructing the given visible-light image from individual portions of the visible-light images that are selected for each visible node of said set,
wherein the step of reconstructing comprises warping the individual portions of the visible-light images that are selected for each visible node of said set to generate the given visible-light image.
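Claim 1 describes a 3D data structure in which each captured visible-light image is split into portions according to the grid region that each pixel's 3D position falls into, and a new view is reconstructed by selecting, per visible region, the stored portion whose capture orientation best matches the new viewing direction, then warping the selected portions. The sketch below is one possible, simplified reading of that structure, not the patented implementation: the convex-polyhedral regions are reduced to axis-aligned cubic cells, per-pixel 3D positions are assumed to be already computed from depth, and every name used here (EnvironmentGrid, GridNode, ImagePortion, insert_image, select_portions) is hypothetical.

# Illustrative sketch only; regions simplified to cubic cells, orientation
# information reduced to a unit camera view-direction vector, and per-pixel
# 3D positions assumed given. All identifiers are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

Cell = Tuple[int, int, int]          # integer index of one grid region

@dataclass
class ImagePortion:
    pixels: np.ndarray               # colour values of the portion's pixels (N x C)
    points_3d: np.ndarray            # their 3D positions (N x 3) in the given coordinate system
    view_dir: np.ndarray             # unit vector: capture orientation of this portion

@dataclass
class GridNode:
    portions: List[ImagePortion] = field(default_factory=list)

class EnvironmentGrid:
    """3D data structure: one node per (here cubic) region of the 3D grid."""

    def __init__(self, cell_size: float):
        self.cell_size = cell_size
        self.nodes: Dict[Cell, GridNode] = {}

    def insert_image(self, colours: np.ndarray, points_3d: np.ndarray,
                     view_dir: np.ndarray) -> None:
        """Divide one captured image into portions by the region that each
        pixel's 3D position falls into, and store each portion, together
        with its orientation information, in the corresponding node."""
        flat_pts = points_3d.reshape(-1, 3)
        flat_col = colours.reshape(-1, colours.shape[-1])
        cells = np.floor(flat_pts / self.cell_size).astype(int)
        unit_dir = view_dir / np.linalg.norm(view_dir)
        for cell in {tuple(c) for c in cells}:
            mask = np.all(cells == cell, axis=1)
            portion = ImagePortion(flat_col[mask], flat_pts[mask], unit_dir)
            self.nodes.setdefault(cell, GridNode()).portions.append(portion)

    def select_portions(self, viewpoint: np.ndarray,
                        visible_cells: List[Cell]) -> List[ImagePortion]:
        """For each visible node, select the stored portion whose orientation
        best matches the view direction from the new viewpoint to that
        node's region (visibility determination itself is not sketched)."""
        selected = []
        for cell in visible_cells:
            node = self.nodes.get(cell)
            if not node or not node.portions:
                continue
            centre = (np.array(cell) + 0.5) * self.cell_size
            direction = centre - viewpoint
            direction /= np.linalg.norm(direction)
            best = max(node.portions,
                       key=lambda p: float(np.dot(p.view_dir, direction)))
            selected.append(best)
        return selected

A complete pipeline in the spirit of the claim would then reproject (warp) each selected portion into the image plane of the requested viewpoint and composite the results; that warping and compositing step is omitted from this sketch.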