CPC H04N 19/14 (2014.11) [G06T 3/18 (2024.01); G06T 7/50 (2017.01); G06T 7/586 (2017.01); G06T 7/593 (2017.01); H04L 65/70 (2022.05); H04N 19/132 (2014.11); H04N 19/172 (2014.11); H04N 19/25 (2014.11); G06T 15/06 (2013.01); H04N 13/161 (2018.05); H04N 13/194 (2018.05); H04N 13/302 (2018.05); H04N 19/182 (2014.11); H04N 19/184 (2014.11); H04N 19/597 (2014.11)]
14 Claims
1. A computer-implemented method comprising the steps of:
receiving a light field data set comprising a 3D description of a scene;
partitioning the light field data set into a plurality of scene decomposition layers, each of the plurality of scene decomposition layers having depth boundaries relative to a display screen, each of the scene decomposition layers comprising a plurality of pixels, each pixel comprising a color and a disparity value within the layer;
for each of the plurality of scene decomposition layers, encoding the layer by applying a disparity bit width to the layer and encoding each of the plurality of pixels in the layer using the pixel's disparity value and the layer's disparity bit width to create a fixed-point representation of the scene;
sampling each of the scene decomposition layers;
rendering each of the scene decomposition layers to produce a set of compressed data light field layers; and
executing a view synthesis protocol comprising instructions for a multi-stage reconstruction of a pixel array from at least one reference elemental image in each scene decomposition layer to decode the compressed data light field layers and construct a subset of a plurality of light fields.
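The partitioning and per-layer fixed-point encoding steps of claim 1 can be illustrated with a short sketch. This is not the patented implementation; the uniform depth slicing, the `uint16` code container, and the min/max normalization per layer are all assumptions chosen for illustration. The key idea it demonstrates is that each layer spans a narrow disparity range, so a small per-layer bit width suffices for a fixed-point code:

```python
import numpy as np

def partition_into_layers(depth, num_layers, z_near, z_far):
    """Assign each pixel a layer index by slicing depth into uniform
    boundaries relative to the display screen (uniform slicing is an
    illustrative assumption; the claim only requires depth boundaries)."""
    edges = np.linspace(z_near, z_far, num_layers + 1)
    return np.clip(np.digitize(depth, edges) - 1, 0, num_layers - 1)

def encode_layer_disparity(disparity, bit_width):
    """Quantize one layer's disparity values to unsigned fixed-point codes
    using the layer's disparity bit width. Returns the codes plus the
    (min, max) bounds needed to decode them."""
    d_min, d_max = float(disparity.min()), float(disparity.max())
    levels = (1 << bit_width) - 1
    scale = levels / (d_max - d_min) if d_max > d_min else 0.0
    codes = np.round((disparity - d_min) * scale).astype(np.uint16)
    return codes, (d_min, d_max)

def decode_layer_disparity(codes, bounds, bit_width):
    """Invert the fixed-point quantization back to floating-point disparity."""
    d_min, d_max = bounds
    levels = (1 << bit_width) - 1
    return d_min + codes.astype(np.float64) * (d_max - d_min) / levels
```

A round trip through `encode_layer_disparity` and `decode_layer_disparity` reconstructs each disparity value to within half a quantization step of the chosen bit width, which is what makes a small per-layer `bit_width` a compression lever.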
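The final step, reconstructing a pixel array from a reference elemental image in each layer, can be sketched as layered view synthesis. The forward warp by `disparity * baseline` with back-to-front compositing is one common reconstruction strategy and is an assumption here, not the claimed multi-stage protocol; `synthesize_view` and its parameters are hypothetical names:

```python
import numpy as np

def synthesize_view(layer_colors, layer_disparities, baseline):
    """Composite a novel view from per-layer reference elemental images.
    Each layer's pixels are shifted horizontally by disparity * baseline;
    layers are assumed ordered far-to-near so nearer writes win."""
    h, w, _ = layer_colors[0].shape
    out = np.zeros((h, w, 3), dtype=layer_colors[0].dtype)
    for colors, disp in zip(layer_colors, layer_disparities):
        shift = np.round(disp * baseline).astype(int)   # per-pixel shift
        xs = np.arange(w)[None, :] + shift              # warped column index
        valid = (xs >= 0) & (xs < w)                    # clip off-screen pixels
        rows = np.repeat(np.arange(h)[:, None], w, axis=1)
        out[rows[valid], xs[valid]] = colors[valid]
    return out
```

Decoding each layer's fixed-point disparities (as in the encoding sketch) and feeding them through such a warp for several baselines yields the subset of light fields the claim refers to, one synthesized view per baseline.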