US 12,217,363 B2
Systems and methods for capturing, transporting, and reproducing three-dimensional simulations as interactive volumetric displays
Jaroslav Stehlik, Dobruska (CZ); Alexander Brook Perry, London (GB); Arthur Louis Brainville, Meslay du Maine (FR); Steffan Robert William Donal, Saint Albans (GB); Bunta Adrian-Nicolae, Cluj (RO); Omar Mohamed Ali Mudhir, Petriano (IT); and Ajinyad Karwan Shewki, Berlin (DE)
Assigned to LIV, INC., Wilmington, DE (US)
Filed by LIV, INC., Wilmington, DE (US)
Filed on Sep. 22, 2023, as Appl. No. 18/371,815.
Application 18/371,815 is a continuation of application No. 17/978,640, filed on Nov. 1, 2022, granted, now Pat. No. 11,769,299.
Claims priority of provisional application 63/406,392, filed on Sep. 14, 2022.
Prior Publication US 2024/0185526 A1, Jun. 6, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06T 7/50 (2017.01); G06T 7/80 (2017.01); G06T 7/90 (2017.01); G06T 17/20 (2006.01); H04N 13/351 (2018.01); H04N 19/597 (2014.01)
CPC G06T 17/20 (2013.01) [G06T 7/50 (2017.01); G06T 7/80 (2017.01); G06T 7/90 (2017.01); G06T 19/006 (2013.01); H04N 13/351 (2018.05); H04N 19/597 (2014.11)] 21 Claims
OG exemplary drawing
 
1. A method for processing mixed reality 3D volumetric environment data comprised of a simulated portion and a real-world portion so that the mixed reality 3D volumetric environment can be viewed interactively by at least one final user, the method using a plurality of virtual cameras in the simulated portion and at least one real-world video camera focused in the same position and orientation in the mixed reality 3D environment as at least one virtual camera, the method comprising:
(a) capturing from the virtual cameras the simulated portion of the mixed reality 3D volumetric environment with at least the following data in a format using time tags:
(i) intrinsic data from each of the virtual cameras on a frame-by-frame basis,
(ii) extrinsic data from each of the virtual cameras on a frame-by-frame basis, and
(iii) depth data from each of the virtual cameras on a frame-by-frame basis;
(b) capturing from the at least one real-world video camera the real-world portion of the mixed reality 3D volumetric environment with at least the following data in a format using time tags:
(i) intrinsic data from the at least one real-world video camera on a frame-by-frame basis of a real-world actor in the field of view of the real-world video camera, and
(ii) extrinsic data from the at least one real-world video camera on a frame-by-frame basis, and
(c) compressing the time-tagged data captured in steps (a) and (b) by:
(i) compressing the frame-by-frame intrinsic and extrinsic data from each of the virtual capture cameras and each of the real-world cameras, and
(ii) compressing the frame-by-frame depth data from each of the virtual capture cameras and each of the real-world cameras;
(d) preparing the time-tagged data compressed in step (c) for viewing by the at least one final user by:
(i) tessellate shading each virtual camera frame as a grid of uniformly spaced unconnected quads,
(ii) converting each of the quads into a specific location within the simulated 3D volumetric environment,
(iii) positioning each vertex of the quads in the 3D volumetric environment based on an X/Y position in each virtual camera's frame and depth, the depth being determined using the depth data for each frame, and
(iv) isolating the real-world actor images in the field of view of the real-world video camera from any real-world background and overlaying the isolated real-world actor images over the quads in the same position and orientation in the simulated portion of the mixed reality 3D environment as the at least one real-world video camera, thereby enabling the mixed reality 3D volumetric environment to be viewed interactively by the at least one final user.
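
The annotated sketches below follow the claim in order: the time-tagged capture format of steps (a) and (b), the compression of step (c), the quad tessellation and vertex positioning of steps (d)(i)-(iii), and the actor isolation and overlay of step (d)(iv). Steps (a) and (b) require intrinsic, extrinsic, and (for the virtual cameras) depth data on a frame-by-frame basis in a format using time tags, but the claim names no schema; the record below is a minimal Python sketch, and every field name (time_tag, world_from_camera, and so on) is an assumption rather than anything recited in the patent.

from dataclasses import dataclass

import numpy as np


@dataclass
class CameraFrameRecord:
    """One time-tagged capture frame for a virtual or real-world camera.

    Field names are illustrative; the claim only requires that intrinsic,
    extrinsic, and depth data be stored per frame with a time tag.
    """
    time_tag: float                # seconds since capture start
    camera_id: str                 # which virtual or real-world camera
    intrinsics: np.ndarray         # 3x3 pinhole matrix (fx, fy, cx, cy)
    world_from_camera: np.ndarray  # 4x4 extrinsic pose for this frame
    depth: np.ndarray | None = None  # HxW float32 depth (virtual cameras only)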
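Step (c) compresses the per-frame camera data and depth streams but names no codec; the H04N 19/597 classification suggests multiview/depth video coding in practice. As a placeholder, this sketch compresses a single depth frame losslessly with zlib, which stands in for whatever codec an implementation would actually use and makes the round trip easy to verify.

import zlib

import numpy as np


def compress_depth_frame(depth: np.ndarray) -> bytes:
    """Losslessly compress one HxW float32 depth frame (zlib is a stand-in;
    the claim requires compression but does not name a codec)."""
    assert depth.dtype == np.float32
    return zlib.compress(depth.tobytes(), 6)


def decompress_depth_frame(blob: bytes, shape: tuple[int, int]) -> np.ndarray:
    """Invert compress_depth_frame, restoring the HxW float32 frame."""
    return np.frombuffer(zlib.decompress(blob), dtype=np.float32).reshape(shape)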
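Steps (d)(i)-(iii) tessellate each virtual camera frame into a grid of uniformly spaced unconnected quads and place each vertex in the volumetric environment from its X/Y frame position and its per-frame depth. The "tessellate shading" language implies a GPU tessellation stage; the sketch below performs the same arithmetic on the CPU with NumPy, assuming a standard pinhole intrinsic matrix and a 4x4 camera-to-world extrinsic pose. The function name and grid step are illustrative, and one point is emitted per grid sample where an implementation would emit the four corners of each quad.

import numpy as np


def position_grid_vertices(depth: np.ndarray,
                           intrinsics: np.ndarray,
                           world_from_camera: np.ndarray,
                           step: int = 8) -> np.ndarray:
    """Place a uniformly spaced grid of quad vertices in the 3D environment.

    Mirrors steps (d)(i)-(iii): sample the frame on a uniform grid, then
    unproject each sample's X/Y pixel position and depth value through the
    camera intrinsics, and move it to world space via the extrinsic pose.
    Returns an (N, 3) array of world-space positions, one per grid sample.
    """
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]

    h, w = depth.shape
    ys, xs = np.mgrid[0:h:step, 0:w:step]   # uniformly spaced sample grid
    z = depth[ys, xs].ravel()               # per-vertex depth, step (a)(iii)

    # Pinhole unprojection: pixel (x, y) at depth z -> camera-space point.
    x_cam = (xs.ravel() - cx) * z / fx
    y_cam = (ys.ravel() - cy) * z / fy
    pts_cam = np.stack([x_cam, y_cam, z, np.ones_like(z)])

    # Extrinsics: 4x4 camera-to-world transform recorded for this frame.
    pts_world = world_from_camera @ pts_cam
    return pts_world[:3].T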
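Step (d)(iv) isolates the real-world actor from the real-world background and overlays the isolated images on the quads rendered from the matching camera pose. The claim does not specify the isolation method; a chroma-key matte is one common choice in mixed-reality capture, and the sketch below assumes it. The key color, tolerance, and function names are hypothetical.

import numpy as np


def isolate_actor(frame_rgb: np.ndarray,
                  key_color: tuple[int, int, int] = (0, 255, 0),
                  tolerance: float = 60.0) -> np.ndarray:
    """Matte the real-world actor out of the real-world background.

    Assumes a chroma-key background (the claim leaves the isolation method
    open). Returns an HxW alpha matte: 1.0 on the actor, 0.0 on background.
    """
    key = np.array(key_color, dtype=np.float32)
    dist = np.linalg.norm(frame_rgb.astype(np.float32) - key, axis=-1)
    return (dist > tolerance).astype(np.float32)


def overlay_actor(rendered_rgb: np.ndarray,
                  actor_rgb: np.ndarray,
                  alpha: np.ndarray) -> np.ndarray:
    """Composite the isolated actor over the simulated portion rendered
    from the same position and orientation, per step (d)(iv)."""
    a = alpha[..., None]
    out = a * actor_rgb.astype(np.float32) + (1.0 - a) * rendered_rgb.astype(np.float32)
    return out.astype(rendered_rgb.dtype)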