US 11,721,071 B2
Methods and systems for producing content in multiple reality environments
James George, Brooklyn, NY (US); Alexander Porter, Brooklyn, NY (US); Timothy Scaffidi, Brooklyn, NY (US); Neil Purvey, Brooklyn, NY (US); and Patricia Shiu, Ridgewood, NY (US)
Assigned to SIMILE INC., Brooklyn, NY (US)
Filed by SIMILE INC., Brooklyn, NY (US)
Filed on Mar. 23, 2022, as Appl. No. 17/702,215.
Application 17/702,215 is a continuation of application No. 16/979,000, granted, now Pat. No. 11,288,864, previously published as PCT/US2019/021281, filed on Mar. 8, 2019.
Claims priority of provisional application 62/640,285, filed on Mar. 8, 2018.
Prior Publication US 2022/0215628 A1, Jul. 7, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 17/20 (2006.01); G06T 15/04 (2011.01); G06T 15/08 (2011.01)
CPC G06T 17/20 (2013.01) [G06T 15/04 (2013.01); G06T 15/08 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for producing a synthetic video image comprising:
receiving a plurality of video and depth inputs from a plurality of respective camera systems capturing a scene from a respective plurality of different perspectives, wherein each video and depth input is captured by a respective camera system and includes a respective video stream of the scene captured from a respective perspective and a respective depth stream of the scene captured from the respective perspective;
for each respective video and depth input, generating a depth and color stream corresponding to the respective perspective of the video and depth input based on the video stream and the depth stream, wherein each respective depth and color stream includes i) a color image stream including a sequence of color images derived from the video stream of the video and depth input and ii) a refined depth image stream corresponding to the color image stream that includes a sequence of dense refined depth images that are refined by reprojecting depth images from the depth stream into respective color images of the video stream, wherein each dense refined depth image includes a grid of depth pixels that each indicate a respective depth value and respective color values derived from a corresponding color image;
generating a geometry video stream corresponding to the scene based on the plurality of depth and color streams respectively derived from the plurality of video and depth inputs, wherein the geometry video stream includes a sequence of geometry frames, each geometry frame having embedded therein a respective color image and a respective dense refined depth image from each of the plurality of depth and color streams;
generating a surface stream based on the geometry video stream in accordance with a surface reconstruction process, wherein the surface stream includes a geometry stream that defines a geometry of an object captured in the scene and a texture stream, time-aligned with the geometry stream, that defines a texture of a surface of the object; and
outputting the surface stream to a buffer and/or a renderer.
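The claim's refinement step reprojects each depth image into the corresponding color image so that every color pixel carries a depth value. The claim does not spell out the reprojection math; the following is a minimal Python/NumPy sketch, assuming pinhole intrinsics (K_depth, K_color) and a depth-to-color extrinsic transform known from calibration, with all names hypothetical. Densifying the splatted result (e.g., by interpolation) to obtain the claim's dense refined depth image is left out for brevity.

import numpy as np

def reproject_depth_to_color(depth, K_depth, K_color, T_depth_to_color,
                             color_shape):
    """Reproject a depth image into the color camera's image plane.

    depth: (H, W) float array of metric depth values (0 = invalid).
    K_depth, K_color: 3x3 pinhole intrinsic matrices (assumed known
    from calibration; not specified in the claim).
    T_depth_to_color: 4x4 rigid transform from depth to color camera.
    Returns an (Hc, Wc) depth map aligned with the color image.
    """
    H, W = depth.shape
    Hc, Wc = color_shape

    # Unproject every valid depth pixel to a 3D point in the depth
    # camera's coordinate frame.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v[valid] - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=0)  # (4, N)

    # Rigidly transform the points into the color camera's frame.
    pts_c = (T_depth_to_color @ pts)[:3]

    # Project into the color image plane.
    zc = pts_c[2]
    uc = np.round(K_color[0, 0] * pts_c[0] / zc + K_color[0, 2]).astype(int)
    vc = np.round(K_color[1, 1] * pts_c[1] / zc + K_color[1, 2]).astype(int)

    # Z-buffer splat: keep the nearest depth where pixels collide.
    out = np.full((Hc, Wc), np.inf)
    keep = (zc > 0) & (uc >= 0) & (uc < Wc) & (vc >= 0) & (vc < Hc)
    np.minimum.at(out, (vc[keep], uc[keep]), zc[keep])
    out[np.isinf(out)] = 0.0  # 0 marks pixels with no depth sample
    return out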
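The geometry-frame step embeds every perspective's color image and refined depth image in a single video frame. The tiling layout and the 16-bit two-channel depth encoding below are illustrative assumptions, not a format specified by the claim; volumetric-video pipelines often prefer more compression-robust encodings (e.g., hue-based depth), since a raw high/low-byte split does not survive lossy codecs well.

import numpy as np

def pack_geometry_frame(color_images, depth_images, d_min=0.0, d_max=5.0):
    """Pack per-perspective color + refined depth pairs into one frame.

    color_images: list of (H, W, 3) uint8 arrays, one per perspective,
    all assumed to share the same resolution.
    depth_images: list of (H, W) float arrays aligned to the colors.
    """
    tiles = []
    for color, depth in zip(color_images, depth_images):
        # Quantize metric depth into 16 bits over a fixed range and
        # split it across the red and green channels so it can be
        # stored in an ordinary video frame.
        d = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
        d16 = (d * 65535).astype(np.uint16)
        depth_rgb = np.zeros_like(color)
        depth_rgb[..., 0] = (d16 >> 8).astype(np.uint8)    # high byte
        depth_rgb[..., 1] = (d16 & 0xFF).astype(np.uint8)  # low byte
        # Each perspective contributes a [color | depth] tile.
        tiles.append(np.hstack([color, depth_rgb]))
    # Stack the perspectives vertically into a single geometry frame.
    return np.vstack(tiles)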
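The surface reconstruction process itself is not detailed in claim 1. One common strategy, sketched below under that assumption, is to treat each dense refined depth image as a grid of 3D samples: unproject the pixels into vertices and triangulate neighboring pixels into faces, skipping depth discontinuities; per-vertex UVs into the time-aligned color image stand in for the texture stream.

import numpy as np

def depth_grid_to_mesh(depth, K, depth_discontinuity=0.05):
    """Triangulate a dense refined depth image into a textured mesh.

    Returns vertices, triangle indices, and per-vertex UVs into the
    aligned color image. K is the color camera's 3x3 intrinsic matrix.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Unproject each depth pixel to a 3D vertex (invalid pixels are
    # kept as placeholders and simply never referenced by a face).
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    verts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    uvs = np.stack([u / (W - 1), 1.0 - v / (H - 1)], axis=-1).reshape(-1, 2)

    idx = lambda r, c: r * W + c
    faces = []
    for r in range(H - 1):
        for c in range(W - 1):
            quad = depth[r:r + 2, c:c + 2]
            if (quad <= 0).any():
                continue  # skip pixels with no depth sample
            if quad.max() - quad.min() > depth_discontinuity:
                continue  # don't bridge foreground and background
            a, b = idx(r, c), idx(r, c + 1)
            cc, d = idx(r + 1, c), idx(r + 1, c + 1)
            faces.append((a, b, cc))
            faces.append((b, d, cc))
    return verts, np.asarray(faces, dtype=np.int64), uvs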