US 12,464,157 B2
Method and apparatus for inter prediction using motion vector candidate based on temporal motion prediction
Sung Chang Lim, Daejeon-si (KR); Hui Yong Kim, Daejeon-si (KR); Se Yoon Jeong, Daejeon-si (KR); Suk Hee Cho, Daejeon-si (KR); Jong Ho Kim, Daejeon-si (KR); Ha Hyun Lee, Seoul (KR); Jin Ho Lee, Daejeon-si (KR); Jin Soo Choi, Daejeon-si (KR); Jin Woong Kim, Daejeon-si (KR); and Chie Teuk Ahn, Daejeon-si (KR)
Assigned to Electronics and Telecommunications Research Institute, Daejeon (KR)
Filed by Electronics and Telecommunications Research Institute, Daejeon (KR)
Filed on Jun. 18, 2024, as Appl. No. 18/746,655.
Application 18/746,655 is a continuation of application No. 18/347,483, filed on Jul. 5, 2023, granted, now 12,063,385.
Application 18/347,483 is a continuation of application No. 17/525,053, filed on Nov. 12, 2021, granted, now 11,743,486, issued on Aug. 29, 2023.
Application 17/525,053 is a continuation of application No. 16/887,370, filed on May 29, 2020, granted, now 11,206,424, issued on Dec. 21, 2021.
Application 16/887,370 is a continuation of application No. 16/456,629, filed on Jun. 28, 2019, granted, now 10,708,614, issued on Jul. 7, 2020.
Application 16/456,629 is a continuation of application No. 13/989,126, granted, now 10,397,599, issued on Aug. 27, 2019, previously published as PCT/KR2011/009772, filed on Dec. 19, 2011.
Claims priority of application No. 10-2010-0130229 (KR), filed on Dec. 17, 2010; and application No. 10-2011-0137042 (KR), filed on Dec. 19, 2011.
Prior Publication US 2024/0340440 A1, Oct. 10, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 23/60 (2023.01); G06V 20/00 (2022.01); H04N 7/18 (2006.01); H04N 19/52 (2014.01); H04N 23/61 (2023.01); H04N 23/63 (2023.01)
CPC H04N 19/52 (2014.11) 3 Claims
OG exemplary drawing
 
1. A video decoding method, comprising:
deriving spatial motion information from a neighboring unit of a decoding target unit;
obtaining index information relating to temporal motion information of the decoding target unit from a bitstream,
wherein the index information specifies a first reference picture having the temporal motion information of the decoding target unit among a plurality of reference pictures in a reference picture list, and
wherein the neighboring unit is adjacent to the decoding target unit or is disposed at a corner of the decoding target unit;
selecting, based on the index information, the first reference picture among the plurality of reference pictures in the reference picture list;
deriving the temporal motion information from a collocated unit of the selected first reference picture, the first reference picture having a different temporal order from a picture including the decoding target unit,
wherein the temporal motion information includes a motion vector of the collocated unit;
generating a merge candidate list for the decoding target unit including the derived spatial motion information and the derived temporal motion information;
performing motion compensation on the decoding target unit by using the merge candidate list;
generating a prediction block of the decoding target unit using a result of the motion compensation;
obtaining a transform coefficient for the decoding target unit;
obtaining a residual block of the decoding target unit based on the transform coefficient; and
generating a reconstructed block of the decoding target unit based on the prediction block and the residual block,
wherein the step of performing the motion compensation comprises:
obtaining a merge index of the decoding target unit;
selecting motion information indicated by the merge index among merge candidates included in the merge candidate list; and
performing the motion compensation for the decoding target unit using the selected motion information.
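The claimed decoding flow can be illustrated with a minimal Python sketch. All names and data structures here (`MotionInfo`, the dict-based reference-picture representation, the fixed candidate limit) are hypothetical simplifications for illustration; a real decoder such as an HEVC implementation derives these per the standard's detailed rules.

```python
# Toy sketch of the claimed merge-mode decoding steps: spatial candidates
# from neighboring units, a temporal candidate from the collocated unit of
# an index-selected reference picture, merge-list construction, motion
# compensation, and reconstruction from prediction + residual.
from dataclasses import dataclass


@dataclass(frozen=True)
class MotionInfo:
    mv_x: int     # motion vector, horizontal component
    mv_y: int     # motion vector, vertical component
    ref_idx: int  # index into the reference picture list


def derive_spatial_candidates(neighbors):
    """Spatial motion information from units adjacent to, or at a corner
    of, the decoding target unit; unavailable units and duplicates drop out."""
    seen, out = set(), []
    for n in neighbors:
        if n is not None and n not in seen:
            seen.add(n)
            out.append(n)
    return out


def derive_temporal_candidate(ref_pic_list, ref_index, col_pos):
    """Select the first reference picture via the signalled index, then take
    the motion vector of its collocated unit at position col_pos."""
    first_ref = ref_pic_list[ref_index]            # picked by bitstream index
    mv_x, mv_y = first_ref["collocated_mv"][col_pos]
    return MotionInfo(mv_x, mv_y, ref_index)


def build_merge_list(spatial, temporal, max_cands=5):
    """Merge candidate list: spatial candidates first, then the temporal one."""
    cands = list(spatial)
    if temporal is not None and temporal not in cands:
        cands.append(temporal)
    return cands[:max_cands]


def motion_compensate(ref_samples, mi, x, y, w, h):
    """Copy a w-by-h prediction block from the reference picture, displaced
    by the selected motion vector (no sub-pel interpolation in this toy)."""
    sx, sy = x + mi.mv_x, y + mi.mv_y
    return [[ref_samples[sy + j][sx + i] for i in range(w)] for j in range(h)]


def reconstruct(pred, resid):
    """Reconstructed block = prediction block + residual block."""
    return [[p + r for p, r in zip(pr, rr)] for pr, rr in zip(pred, resid)]
```

A usage pass follows the claim's order: derive spatial and temporal candidates, pick the candidate the merge index points at, motion-compensate, and add the residual (which a real decoder would first obtain by inverse-transforming the parsed transform coefficients).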