| CPC H04N 13/271 (2018.05) [G06T 7/50 (2017.01); G06T 19/006 (2013.01); G06T 2207/10028 (2013.01)] | 15 Claims |

1. A method for encoding, implemented by at least one server that is communicably coupled to at least one display apparatus, the method comprising:
obtaining a real-world depth map corresponding to a viewpoint from a perspective of which a virtual-reality (VR) depth map has been generated;
for a given pixel in the VR depth map, finding an optical depth (D) of a corresponding pixel in the real-world depth map;
determining a lower bound (D−D1) for the given pixel, by subtracting a first predefined value (D1) from the optical depth (D) of the corresponding pixel in the real-world depth map;
determining an upper bound (D+D2) for the given pixel, by adding a second predefined value (D2) to the optical depth (D) of the corresponding pixel in the real-world depth map;
re-mapping an optical depth of the given pixel fetched from the VR depth map, from a scale of the lower bound to the upper bound determined for the given pixel, to another scale of A to B, wherein A and B are scalars;
encoding re-mapped optical depths of pixels of the VR depth map into an encoded depth map; and
sending the encoded depth map to the at least one display apparatus.
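The claim specifies the per-pixel bounds (D−D1, D+D2) and the target scale A to B, but not the form of the re-mapping or encoding function. The sketch below assumes a linear re-map with clamping to the bounds and 8-bit quantisation as the encoding step; the function name, parameter names, and example values are illustrative and not taken from the patent text.

```python
import numpy as np

def remap_vr_depth(vr_depth, real_depth, d1, d2, a=0.0, b=1.0):
    """Re-map each VR depth value from [D - d1, D + d2] to [a, b],
    where D is the real-world depth of the corresponding pixel.
    A linear re-map with clamping is assumed (not stated in the claim)."""
    lower = real_depth - d1          # lower bound (D - D1), per pixel
    upper = real_depth + d2          # upper bound (D + D2), per pixel
    t = (vr_depth - lower) / (upper - lower)
    return a + np.clip(t, 0.0, 1.0) * (b - a)

# Example: 2x2 depth maps in metres, D1 = D2 = 0.5 m, target scale 0 to 255,
# with the re-mapped values quantised to 8 bits as a stand-in for the encoding step.
vr = np.array([[2.1, 2.4], [3.0, 1.8]])   # VR depth map
rw = np.array([[2.0, 2.5], [2.8, 2.0]])   # real-world depth map
encoded = np.round(remap_vr_depth(vr, rw, d1=0.5, d2=0.5, a=0.0, b=255.0)).astype(np.uint8)
print(encoded)
```

Because the bounds are derived per pixel from the real-world depth map, the available code range A to B is spent only on the interval around the corresponding real-world depth, which is the apparent motivation for the re-mapping step before encoding.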