CPC G06T 17/20 (2013.01) [G06T 7/50 (2017.01); G06T 7/80 (2017.01); G06T 7/90 (2017.01); G06T 19/006 (2013.01); H04N 13/351 (2018.05); H04N 19/597 (2014.11)] | 21 Claims |
1. A method for processing mixed reality 3D volumetric environment data comprised of a simulated portion and a real-world portion so that the mixed reality 3D volumetric environment can be viewed interactively by at least one final user, the method using a plurality of virtual cameras in the simulated portion and at least one real-world video camera having the same position and orientation in the mixed reality 3D environment as at least one of the virtual cameras, the method comprising:
(a) capturing from the virtual cameras the simulated portion of the mixed reality 3D volumetric environment with at least the following data in a format using time tags:
(i) intrinsic data from each of the virtual cameras on a frame-by-frame basis,
(ii) extrinsic data from each of the virtual cameras on a frame-by-frame basis, and
(iii) depth data from each of the virtual cameras on a frame-by-frame basis;
(b) capturing from the at least one real-world video camera the real-world portion of the mixed reality 3D volumetric environment with at least the following data in a format using time tags:
(i) intrinsic data from the at least one real-world video camera on a frame-by-frame basis for frames having a real-world actor in the field of view of the real-world video camera, and
(ii) extrinsic data from the at least one real-world video camera on a frame-by-frame basis, and
(c) compressing the time-tagged data captured in steps (a) and (b) by:
(i) compressing the frame-by-frame intrinsic and extrinsic data from each of the virtual cameras and the at least one real-world video camera, and
(ii) compressing the frame-by-frame depth data from each of the virtual cameras and the at least one real-world video camera;
(d) preparing the time-tagged data compressed in step (c) for viewing by the at least one final user by:
(i) tessellation shading each virtual camera frame as a grid of uniformly spaced unconnected quads,
(ii) converting each of the quads into a specific location within the simulated 3D volumetric environment,
(iii) positioning each vertex of the quads in the 3D volumetric environment based on its X/Y position in the virtual camera's frame and on its depth, the depth being determined using the depth data for each frame, and
(iv) isolating the real-world actor images in the field of view of the real-world video camera from any real-world background and overlaying the isolated real-world actor images over the quads at the same position and orientation in the simulated portion of the mixed reality 3D environment as the at least one real-world video camera, thereby enabling the mixed reality 3D volumetric environment to be viewed interactively by the at least one final user.
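The per-frame capture of steps (a) and (b) can be pictured as a time-tagged record holding each camera's intrinsic matrix, extrinsic pose, and, for virtual cameras, a depth map. The sketch below is illustrative only; the field names, matrix conventions, and NumPy representation are assumptions rather than part of the claim.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class CameraFrameRecord:
    """One time-tagged frame of capture data (illustrative; field names are assumptions).

    Virtual cameras supply intrinsic, extrinsic, and depth data per frame (step (a));
    the real-world video camera supplies intrinsic and extrinsic data per frame (step (b)).
    """
    time_tag: float                     # capture timestamp, in seconds
    camera_id: str                      # identifies the virtual or real-world camera
    intrinsics: np.ndarray              # 3x3 camera matrix (focal lengths, principal point)
    extrinsics: np.ndarray              # 4x4 camera-to-world pose
    depth: Optional[np.ndarray] = None  # HxW depth image; present for virtual cameras only


# Example: a virtual camera frame with a 480x640 depth map.
frame = CameraFrameRecord(
    time_tag=0.0333,
    camera_id="virtual_cam_0",
    intrinsics=np.array([[800.0, 0.0, 320.0],
                         [0.0, 800.0, 240.0],
                         [0.0, 0.0, 1.0]]),
    extrinsics=np.eye(4),
    depth=np.ones((480, 640), dtype=np.float32),
)
```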
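Step (c) compresses the time-tagged camera parameters and depth data frame by frame. A minimal sketch follows, in which a general-purpose codec (zlib) and 16-bit depth quantization stand in for whatever compression scheme is actually used; both choices are assumptions.

```python
import json
import zlib
import numpy as np

def compress_camera_params(intrinsics: np.ndarray, extrinsics: np.ndarray,
                           time_tag: float) -> bytes:
    """Pack and compress one frame's time-tagged intrinsic/extrinsic data (step (c)(i))."""
    payload = json.dumps({
        "t": time_tag,
        "K": intrinsics.tolist(),
        "pose": extrinsics.tolist(),
    }).encode("utf-8")
    return zlib.compress(payload)

def compress_depth(depth: np.ndarray, near: float = 0.1, far: float = 100.0) -> bytes:
    """Quantize and compress one frame's depth map (step (c)(ii)).

    16-bit quantization over a near/far range is an assumption; the claim only
    requires that the frame-by-frame depth data be compressed.
    """
    clipped = np.clip(depth, near, far)
    quantized = ((clipped - near) / (far - near) * 65535).astype(np.uint16)
    return zlib.compress(quantized.tobytes())
```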
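Steps (d)(i) through (d)(iii) treat each virtual camera frame as a uniformly spaced grid of unconnected quads and place each quad vertex in the 3D volumetric environment from its X/Y frame position and per-frame depth. The sketch below assumes a pinhole camera model and a camera-to-world extrinsic matrix; the function name and grid spacing are illustrative.

```python
import numpy as np

def quads_to_world(depth: np.ndarray, intrinsics: np.ndarray,
                   extrinsics: np.ndarray, step: int = 8) -> np.ndarray:
    """Position quad vertices in the 3D volumetric environment (steps (d)(i)-(iii)).

    The frame is tessellated into a uniformly spaced grid of unconnected quads;
    each vertex is back-projected from its X/Y frame position and per-frame depth
    through a pinhole model (the pinhole model is an assumption).
    Returns an array of shape (N, 4, 3): N quads, four world-space vertices each.
    """
    h, w = depth.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    quads = []
    for y in range(0, h - step, step):
        for x in range(0, w - step, step):
            corners = []
            for (u, v) in [(x, y), (x + step, y), (x + step, y + step), (x, y + step)]:
                z = float(depth[v, u])                 # per-frame depth at this vertex
                pc = np.array([(u - cx) * z / fx,      # camera-space X from frame X/Y
                               (v - cy) * z / fy,      # camera-space Y from frame X/Y
                               z, 1.0])                # camera-space Z, homogeneous
                corners.append((extrinsics @ pc)[:3])  # camera-to-world transform
            quads.append(corners)
    return np.asarray(quads)
```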
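Step (d)(iv) isolates the real-world actor from the background and composites the result over the quads rendered from the matching camera position and orientation. The chroma-key mask below is only one possible isolation technique and is an assumption; the claim requires only that the actor images be isolated and overlaid.

```python
import numpy as np

def isolate_actor(rgb: np.ndarray, key_color=(0, 255, 0), tol: int = 60) -> np.ndarray:
    """Separate the real-world actor from the background (step (d)(iv)).

    A simple chroma-key mask stands in for the isolation step. Returns an
    HxWx4 RGBA image in which background pixels are fully transparent.
    """
    diff = np.abs(rgb.astype(np.int32) - np.array(key_color)).sum(axis=-1)
    alpha = np.where(diff > tol, 255, 0).astype(np.uint8)
    return np.dstack([rgb.astype(np.uint8), alpha])

def overlay(background: np.ndarray, actor_rgba: np.ndarray) -> np.ndarray:
    """Alpha-composite the isolated actor over the simulated view rendered from
    the same position and orientation as the real-world video camera."""
    alpha = actor_rgba[..., 3:4].astype(np.float32) / 255.0
    out = actor_rgba[..., :3] * alpha + background.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```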