CPC H04N 23/698 (2023.01) [G06T 3/40 (2013.01); H04N 11/28 (2019.01); H04N 19/103 (2014.11); H04N 19/11 (2014.11); H04N 19/119 (2014.11); H04N 19/124 (2014.11); H04N 19/129 (2014.11); H04N 19/13 (2014.11); H04N 19/134 (2014.11); H04N 19/159 (2014.11); H04N 19/174 (2014.11); H04N 19/176 (2014.11); H04N 19/186 (2014.11); H04N 19/30 (2014.11); H04N 19/31 (2014.11); H04N 19/33 (2014.11); H04N 19/44 (2014.11); H04N 19/503 (2014.11); H04N 19/51 (2014.11); H04N 19/625 (2014.11); H04N 19/45 (2014.11)] — 7 Claims
1. A method for decoding a 360-degree image, the method comprising:
receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being derived from an image having a 3-dimensional projection structure and including one or more faces;
generating a prediction image by referring to syntax information obtained from the received bitstream;
obtaining a decoded image by adding the generated prediction image to a residual image, the residual image being obtained by inverse-quantizing and inverse-transforming quantized transform coefficients obtained from the bitstream; and
reconstructing the decoded image into the 360-degree image according to a projection format,
wherein the projection format is selectively determined, based on identification information, from among a plurality of pre-defined projection formats including an ERP format in which the 360-degree image is projected onto a 2-dimensional plane and a CMP format in which the 360-degree image is projected onto a cube,
wherein a size of the extension region is variably determined, independently of a size of the 2-dimensional image, based on at least one of first information indicating a width of a left side of the extension region or second information indicating a width of a right side of the extension region,
wherein the extension region is not included in the image with the 3-dimensional projection structure,
wherein at least one of the identification information, the first information or the second information is obtained from the bitstream, and
wherein the prediction image is generated by referring to at least one neighboring sample adjacent to a decoding target block.
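The decoding steps recited in the claim can be sketched in code. The following is a minimal illustrative sketch, not the patented method itself: the DC intra prediction, the uniform quantization step, the floating-point DCT (standing in for a codec's integer transform), and all function and parameter names are assumptions introduced here for illustration.

```python
import numpy as np

QSTEP = 8  # hypothetical uniform quantization step (assumption)


def inverse_quantize(qcoeff, qstep=QSTEP):
    """De-scale quantized transform coefficients from the bitstream."""
    return qcoeff.astype(np.float64) * qstep


def inverse_transform(coeff):
    """Inverse 2-D DCT-II (orthonormal), an illustrative stand-in for a
    codec's integer inverse transform."""
    n = coeff.shape[0]
    k = np.arange(n)
    # Forward DCT matrix C[k, i] = sqrt(2/n) * cos(pi*(2i+1)*k / (2n))
    dct = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)
    )
    dct[0, :] = 1.0 / np.sqrt(n)
    return dct.T @ coeff @ dct  # 2-D inverse: C^T * coeff * C


def intra_dc_prediction(left, top):
    """Prediction image from neighboring samples adjacent to the target
    block: here, their mean (DC mode), one of the simplest intra modes."""
    n = len(top)
    dc = int(round((np.sum(left) + np.sum(top)) / (2 * n)))
    return np.full((n, n), dc, dtype=np.float64)


def decode_block(qcoeff, left, top):
    """Decoded block = prediction + inverse-quantized, inverse-transformed
    residual, clipped to the 8-bit sample range."""
    pred = intra_dc_prediction(left, top)
    residual = inverse_transform(inverse_quantize(qcoeff))
    return np.clip(np.rint(pred + residual), 0, 255).astype(np.uint8)


def extended_width(base_width, left_ext, right_ext):
    """The extended image width grows by the signalled left/right extension
    widths, independently of the 2-D image size."""
    return left_ext + base_width + right_ext
```

With all-zero coefficients the residual vanishes, so the decoded block equals the DC prediction from its neighbors; `extended_width(16, 2, 3)` yields 21, showing the extension region sized purely from the first/second information.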