US 11,743,486 B2
Method and apparatus for inter prediction using motion vector candidate based on temporal motion prediction
Sung Chang Lim, Daejeon (KR); Hui Yong Kim, Daejeon-si (KR); Se Yoon Jeong, Daejeon-si (KR); Suk Hee Cho, Daejeon-si (KR); Jong Ho Kim, Daejeon-si (KR); Ha Hyun Lee, Seoul (KR); Jin Ho Lee, Daejeon-si (KR); Jin Soo Choi, Daejeon-si (KR); Jin Woong Kim, Daejeon-si (KR); and Chie Teuk Ahn, Daejeon-si (KR)
Assigned to ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Daejeon (KR)
Filed by Electronics and Telecommunications Research Institute, Daejeon (KR)
Filed on Nov. 12, 2021, as Appl. No. 17/525,053.
Application 17/525,053 is a continuation of application No. 16/887,370, filed on May 29, 2020, granted, now 11,206,424.
Application 16/887,370 is a continuation of application No. 16/456,629, filed on Jun. 28, 2019, granted, now 10,708,614, issued on Jul. 7, 2020.
Application 16/456,629 is a continuation of application No. 13/989,126, granted, now 10,397,599, issued on Aug. 27, 2019, previously published as PCT/KR2011/009772, filed on Dec. 19, 2011.
Claims priority of application No. 10-2010-0130229 (KR), filed on Dec. 17, 2010; and application No. 10-2011-0137042 (KR), filed on Dec. 19, 2011.
Prior Publication US 2022/0078477 A1, Mar. 10, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 19/52 (2014.01)
CPC H04N 19/52 (2014.11) 3 Claims
OG exemplary drawing
 
1. A video decoding method, comprising:
deriving spatial motion information from a neighboring unit of a decoding target unit;
obtaining index information relating to temporal motion information of the decoding target unit from a bitstream,
wherein the index information specifies a first reference picture having the temporal motion information of the decoding target unit among a plurality of reference pictures in a reference picture list, and
wherein the neighboring unit is adjacent to the decoding target unit or is disposed at a corner of the decoding target unit;
selecting, based on the index information, the first reference picture among the plurality of reference pictures in the reference picture list;
deriving the temporal motion information from a collocated unit of the selected first reference picture, the first reference picture having a different temporal order from a picture including the decoding target unit,
wherein the temporal motion information includes a motion vector of the collocated unit;
generating a merge candidate list for the decoding target unit including the derived spatial motion information and the derived temporal motion information;
performing motion compensation on the decoding target unit by using the merge candidate list;
generating a prediction block of the decoding target unit using a result of the motion compensation;
obtaining a residual block of the decoding target unit; and
generating a reconstructed block of the decoding target unit by adding the prediction block and the residual block,
wherein the step of performing the motion compensation comprises:
obtaining a merge index of the decoding target unit;
selecting motion information indicated by the merge index among merge candidates included in the merge candidate list; and
performing the motion compensation for the decoding target unit using the selected motion information.
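For illustration, the decoding flow recited in claim 1 can be summarized in a short sketch: spatial candidates derived from neighboring units and a temporal candidate derived from the collocated unit of the signaled reference picture are gathered into a merge candidate list, the merge index obtained from the bitstream selects one candidate for motion compensation, and the reconstructed block is formed by adding the prediction block and the residual block. The sketch below is not taken from the patent or from any reference decoder; the type and function names (MotionInfo, buildMergeCandidateList, selectMergeCandidate, reconstructSample), the candidate ordering, and the fixed 8-bit sample range are assumptions made only for illustration.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Motion information carried by one merge candidate: a motion vector and the
// index of the reference picture it points into.
struct MotionInfo {
    std::int16_t mvX = 0;
    std::int16_t mvY = 0;
    int refIdx = -1;
};

// Builds the merge candidate list of the claim: spatial motion information
// derived from neighboring units is inserted first, followed by the temporal
// motion information derived from the collocated unit of the reference
// picture selected by the signaled index information.
std::vector<MotionInfo> buildMergeCandidateList(
        const std::vector<MotionInfo>& spatialCandidates,    // from adjacent/corner neighbors
        const std::optional<MotionInfo>& temporalCandidate,  // from the collocated unit
        std::size_t maxCandidates) {
    std::vector<MotionInfo> mergeList;
    for (const MotionInfo& candidate : spatialCandidates) {
        if (mergeList.size() >= maxCandidates) break;
        mergeList.push_back(candidate);
    }
    if (temporalCandidate && mergeList.size() < maxCandidates) {
        mergeList.push_back(*temporalCandidate);
    }
    return mergeList;
}

// Motion-compensation selection of the claim: the merge index obtained from
// the bitstream picks the motion information used for the decoding target unit.
MotionInfo selectMergeCandidate(const std::vector<MotionInfo>& mergeList,
                                std::size_t mergeIndex) {
    return mergeList.at(mergeIndex);  // a conforming index never exceeds the list size
}

// Reconstruction step of the claim: one predicted sample plus the corresponding
// residual, clipped to the (assumed) 8-bit sample range.
std::uint8_t reconstructSample(std::uint8_t predSample, std::int16_t residual) {
    const int value = static_cast<int>(predSample) + static_cast<int>(residual);
    return static_cast<std::uint8_t>(std::clamp(value, 0, 255));
}

In this sketch the temporal candidate is appended after the spatial candidates, mirroring the order in which the claim derives them; an actual decoder would also prune duplicate candidates and fill the list up to a fixed size, details the claim does not recite.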