US 11,991,339 B2
Image data encoding/decoding method and apparatus
Ki Baek Kim, Seoul (KR)
Assigned to B1 INSTITUTE OF IMAGE TECHNOLOGY, INC., Seoul (KR)
Filed by B1 INSTITUTE OF IMAGE TECHNOLOGY, INC., Seoul (KR)
Filed on Nov. 29, 2023, as Appl. No. 18/523,847.
Application 18/523,847 is a continuation of application No. 17/073,225, filed on Oct. 16, 2020.
Application 17/073,225 is a continuation of application No. 16/372,251, filed on Apr. 1, 2019, abandoned.
Application 16/372,251 is a continuation of application No. PCT/KR2017/011144, filed on Oct. 10, 2017.
Claims priority of application No. 10-2016-0127883 (KR), filed on Oct. 4, 2016; application No. 10-2016-0129383 (KR), filed on Oct. 6, 2016; and application No. 10-2017-0090613 (KR), filed on Jul. 17, 2017.
Prior Publication US 2024/0098236 A1, Mar. 21, 2024
Int. Cl. H04N 5/20 (2006.01); G06T 3/40 (2006.01); H04N 13/161 (2018.01); H04N 19/103 (2014.01); H04N 19/105 (2014.01); H04N 19/11 (2014.01); H04N 19/119 (2014.01); H04N 19/124 (2014.01); H04N 19/129 (2014.01); H04N 19/13 (2014.01); H04N 19/134 (2014.01); H04N 19/167 (2014.01); H04N 19/17 (2014.01); H04N 19/172 (2014.01); H04N 19/176 (2014.01); H04N 19/597 (2014.01); H04N 19/625 (2014.01); H04N 19/70 (2014.01)
CPC H04N 13/161 (2018.05) [H04N 19/103 (2014.11); H04N 19/105 (2014.11); H04N 19/11 (2014.11); H04N 19/119 (2014.11); H04N 19/124 (2014.11); H04N 19/129 (2014.11); H04N 19/13 (2014.11); H04N 19/134 (2014.11); H04N 19/167 (2014.11); H04N 19/172 (2014.11); H04N 19/176 (2014.11); H04N 19/597 (2014.11); H04N 19/625 (2014.11); H04N 19/70 (2014.11)] 4 Claims
OG exemplary drawing
 
1. A method of decoding a 360-degree image, the method comprising:
receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including one or more faces;
generating a predicted image by performing prediction based on information on the prediction included in the bitstream; and
reconstructing the extended 2-dimensional image based on the predicted image and a residual image,
wherein a size of the extension region is determined based on size information comprising one or more syntax elements obtained from the bitstream,
wherein the number of syntax elements is determined differently based on a projection format for the 3-dimensional projection structure, the projection format being one among a plurality of projection formats including an ERP format in which the 360-degree image is projected onto a two-dimensional plane and a CMP format in which the 360-degree image is projected onto a cube,
wherein the predicted image is added to the residual image to reconstruct the extended 2-dimensional image,
wherein whether the extended 2-dimensional image includes the extension region is determined based on information included in the bitstream,
wherein the 3-dimensional projection structure is selectively determined, based on identification information obtained from the bitstream, from among a plurality of pre-defined 3-dimensional projection structures including a first 3-dimensional projection structure and a second 3-dimensional projection structure,
wherein, in case of the first 3-dimensional projection structure, the size information on the size of the extension region comprises first width information for a left extension region on a left side of the 2-dimensional image and second width information for a right extension region on a right side of the 2-dimensional image,
wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods including a first padding method which horizontally copies the sample values of boundary samples of the face to the sample values of the extension region and a second padding method in which an image characteristic of the extension region differs from the image characteristic of the face, and
wherein sample values of the extension region are determined by horizontally copying the sample values of the face to the sample values of the extension region.
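The first padding method recited in the claim, which horizontally copies boundary samples of a face into the extension region, can be sketched as follows. This is an illustrative reading only, not the patented implementation; the array layout and the `extend_horizontal` name are assumptions.

```python
import numpy as np

def extend_horizontal(face: np.ndarray, left_w: int, right_w: int) -> np.ndarray:
    """Build an extended 2-D image by horizontally copying the boundary
    column samples of `face` into left/right extension regions."""
    left = np.repeat(face[:, :1], left_w, axis=1)     # replicate leftmost column
    right = np.repeat(face[:, -1:], right_w, axis=1)  # replicate rightmost column
    return np.concatenate([left, face, right], axis=1)

face = np.arange(12, dtype=np.uint8).reshape(3, 4)
ext = extend_horizontal(face, left_w=2, right_w=1)
assert ext.shape == (3, 7)                # width grows by left_w + right_w
assert (ext[:, 0] == face[:, 0]).all()    # left extension repeats boundary column
assert (ext[:, -1] == face[:, -1]).all()  # right extension repeats boundary column
```

For an ERP-projected image, this kind of copy continues the picture boundary outward, which corresponds to the left and right extension regions whose widths the claim signals separately.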
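The reconstruction step, in which the predicted image is added to the residual image, is conventionally followed by clipping to the valid sample range. A minimal sketch, assuming 8-bit samples and a hypothetical `reconstruct` helper:

```python
import numpy as np

def reconstruct(pred: np.ndarray, resid: np.ndarray, bit_depth: int = 8) -> np.ndarray:
    """Add the predicted image to the residual image and clip the result
    to the sample range implied by the bit depth."""
    lo, hi = 0, (1 << bit_depth) - 1
    return np.clip(pred.astype(np.int32) + resid.astype(np.int32), lo, hi)

pred = np.array([[250, 10]], dtype=np.int32)
resid = np.array([[10, -20]], dtype=np.int32)
rec = reconstruct(pred, resid)
# 250 + 10 clips to 255; 10 - 20 clips to 0
assert rec.tolist() == [[255, 0]]
```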
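The claim also ties the number of extension-size syntax elements to the projection format. One plausible reading: an ERP image has a single projected face and signals two widths (left and right), while a CMP image has six faces and may signal one size per face. The counts and names below are illustrative assumptions, not the actual bitstream syntax.

```python
# Hypothetical mapping from projection format to the number of
# extension-size syntax elements parsed from the bitstream.
NUM_SIZE_ELEMENTS = {
    "ERP": 2,  # e.g., left-width and right-width of the single projected face
    "CMP": 6,  # e.g., one extension size per cube face
}

def parse_extension_sizes(fmt: str, elements: list) -> list:
    """Read as many size syntax elements as the projection format requires."""
    n = NUM_SIZE_ELEMENTS[fmt]
    if len(elements) < n:
        raise ValueError(f"{fmt} requires {n} size elements")
    return elements[:n]

assert parse_extension_sizes("ERP", [4, 8, 0, 0, 0, 0]) == [4, 8]
assert len(parse_extension_sizes("CMP", [1, 2, 3, 4, 5, 6])) == 6
```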