US 12,457,331 B2
Video decoding method and apparatus using the same
Jin Ho Lee, Daejeon (KR); Jung Won Kang, Daejeon (KR); Ha Hyun Lee, Seoul (KR); Jin Soo Choi, Daejeon (KR); and Jin Woong Kim, Daejeon (KR)
Assigned to Electronics and Telecommunications Research Institute, Daejeon (KR)
Filed by Electronics and Telecommunications Research Institute, Daejeon (KR)
Filed on Nov. 1, 2023, as Appl. No. 18/499,965.
Application 18/499,965 is a continuation of application No. 17/532,442, filed on Nov. 22, 2021, granted, now 11,843,773.
Application 17/532,442 is a continuation of application No. 16/691,203, filed on Nov. 21, 2019, granted, now 11,212,526, issued on Dec. 28, 2021.
Application 16/691,203 is a continuation of application No. 16/013,419, filed on Jun. 20, 2018, granted, now 10,516,883, issued on Dec. 24, 2019.
Application 16/013,419 is a continuation of application No. 15/343,887, filed on Nov. 4, 2016, granted, now 10,027,959, issued on Jul. 17, 2018.
Application 15/343,887 is a continuation of application No. 14/326,232, filed on Jul. 8, 2014, granted, now 9,510,001, issued on Nov. 29, 2016.
Claims priority of application No. 10-2013-0080033 (KR), filed on Jul. 9, 2013; and application No. 10-2014-0066012 (KR), filed on May 30, 2014.
Prior Publication US 2024/0064299 A1, Feb. 22, 2024
Int. Cl. H04N 19/117 (2014.01); H04N 19/105 (2014.01); H04N 19/124 (2014.01); H04N 19/13 (2014.01); H04N 19/14 (2014.01); H04N 19/159 (2014.01); H04N 19/176 (2014.01); H04N 19/30 (2014.01); H04N 19/51 (2014.01); H04N 19/61 (2014.01); H04N 19/80 (2014.01); H04N 19/172 (2014.01)
CPC H04N 19/117 (2014.11) [H04N 19/105 (2014.11); H04N 19/124 (2014.11); H04N 19/13 (2014.11); H04N 19/14 (2014.11); H04N 19/159 (2014.11); H04N 19/176 (2014.11); H04N 19/30 (2014.11); H04N 19/51 (2014.11); H04N 19/61 (2014.11); H04N 19/80 (2014.11); H04N 19/172 (2014.11)] 5 Claims
OG exemplary drawing
 
1. A video decoding method supporting a plurality of layers, the method performed by a video decoding apparatus and comprising:
generating a reference picture list of a current slice in a current layer, the reference picture list including an inter-layer reference picture in a reference layer of the current layer;
generating a prediction block of a current block included in the current slice by referencing at least one of a plurality of reference pictures included in the reference picture list, motion information for generating the prediction block being derived from one of merge candidates of the current block; and
generating a reconstructed block of the current block included in the current slice using the prediction block,
wherein the inter-layer reference picture is marked only as a long-term reference picture and is added to the reference picture list for inter-layer prediction,
wherein the prediction block of the current block is generated by applying an interpolation filter to the inter-layer reference picture in response to the inter-layer reference picture being determined to be referenced by the current block,
wherein a position of the inter-layer reference picture in the reference picture list is determined based on inter-layer reference picture information,
wherein the interpolation filter for the current block is one of a plurality of interpolation filter candidates,
wherein the interpolation filter for the current block is determined implicitly based on a scaling ratio for the inter-layer reference picture, and
wherein filter coefficients for the interpolation filter are determined differently depending on whether an integer-positioned pixel or a fractional-positioned pixel is generated.
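The last three clauses of the claim describe an interpolation filter that is chosen implicitly from the scaling ratio of the inter-layer reference picture, with different coefficients for integer- and fractional-positioned samples. A minimal sketch of that selection logic follows; the candidate filter tap sets, the 2x ratio threshold, and the function names are illustrative assumptions, not coefficients or logic taken from the patent or from any reference decoder.

```python
# Hypothetical interpolation-filter selection for inter-layer prediction.
# The two candidate tap sets and the 2x threshold below are assumptions
# chosen only to illustrate the claim's "determined implicitly based on
# a scaling ratio" language.

# Two hypothetical 8-tap interpolation filter candidates.
FILTER_A = [-1, 4, -11, 40, 40, -11, 4, -1]   # assumed candidate for >= 2x scaling
FILTER_B = [0, 1, -5, 17, 58, -10, 4, -1]     # assumed candidate for < 2x scaling

def select_filter(scaling_ratio: float) -> list[int]:
    """Pick one of the filter candidates implicitly from the scaling
    ratio; no syntax element is signalled in the bitstream."""
    return FILTER_A if scaling_ratio >= 2.0 else FILTER_B

def filter_coeffs(scaling_ratio: float, fractional_phase: bool) -> list[int]:
    """Return coefficients that differ depending on whether an
    integer-positioned or a fractional-positioned sample is generated,
    as the final clause of the claim recites."""
    if not fractional_phase:
        # Integer position: a pass-through filter that copies the
        # co-located sample unchanged (single tap of full weight 64).
        return [0, 0, 0, 64, 0, 0, 0, 0]
    # Fractional position: use the implicitly selected candidate.
    return select_filter(scaling_ratio)
```

In this sketch the decoder never parses a filter index: the scaling ratio between the current layer and the reference layer alone determines the candidate, which matches the claim's requirement that the filter be derived implicitly rather than signalled.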