US 12,294,735 B2
Encoder, decoder, encoding method, and decoding method
Kiyofumi Abe, Osaka (JP); Takahiro Nishi, Nara (JP); Tadamasa Toma, Osaka (JP); Ryuichi Kanoh, Osaka (JP); and Takashi Hashimoto, Hyogo (JP)
Assigned to PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA, Torrance, CA (US)
Filed by Panasonic Intellectual Property Corporation of America, Torrance, CA (US)
Filed on Jun. 14, 2024, as Appl. No. 18/743,382.
Application 18/743,382 is a continuation of application No. 18/142,173, filed on May 2, 2023, granted, now 12,052,436.
Application 18/142,173 is a continuation of application No. 17/895,189, filed on Aug. 25, 2022, granted, now 11,677,975, issued on Jun. 13, 2023.
Application 17/895,189 is a continuation of application No. 17/342,076, filed on Jun. 8, 2021, granted, now 11,463,724, issued on Oct. 4, 2022.
Application 17/342,076 is a continuation of application No. 16/794,944, filed on Feb. 19, 2020, granted, now 11,064,216, issued on Jul. 13, 2021.
Application 16/794,944 is a continuation of application No. PCT/JP2018/034793, filed on Sep. 20, 2018.
Claims priority of provisional application 62/563,235, filed on Sep. 26, 2017.
Prior Publication US 2024/0333967 A1, Oct. 3, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 19/52 (2014.01); H04N 19/124 (2014.01); H04N 19/159 (2014.01); H04N 19/176 (2014.01)
CPC H04N 19/52 (2014.11) [H04N 19/124 (2014.11); H04N 19/159 (2014.11); H04N 19/176 (2014.11)] 2 Claims
OG exemplary drawing
 
1. An encoding method, comprising:
determining whether an inter prediction mode is a merge mode; and
when the inter prediction mode is the merge mode:
deriving a first motion vector of a first current block to be processed, using a motion vector of a previously processed block;
deriving a second motion vector of the first current block by performing motion estimation in the vicinity of a position specified by the first motion vector;
generating a prediction image of the first current block by performing motion compensation using the second motion vector;
deriving a third motion vector of a second current block to be processed after the first current block, using the first motion vector of the first current block, when the second current block is included in a first picture including the first current block;
deriving a third motion vector of the second current block, using the second motion vector of the first current block, when the second current block is included in a second picture different from the first picture;
deriving a fourth motion vector of the second current block by performing motion estimation in the vicinity of a position specified by the third motion vector; and
generating a prediction image of the second current block by performing motion compensation using the fourth motion vector.
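The following is a minimal, illustrative Python sketch of the flow recited in claim 1, not the patentee's reference implementation. The helper names (refine_mv, motion_compensate, encode_merge_block), the integer-pel search window, the SAD cost, and the block geometry in the usage example are assumptions introduced purely for illustration.

```python
import numpy as np

SEARCH_RANGE = 2  # illustrative refinement window, in integer pels (assumption)

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def refine_mv(cur_pic, ref_pic, pos, size, start_mv):
    """Motion estimation in the vicinity of the position specified by start_mv
    (corresponds to deriving the second / fourth motion vector)."""
    y, x = pos
    h, w = size
    best_mv, best_cost = start_mv, None
    for dy in range(-SEARCH_RANGE, SEARCH_RANGE + 1):
        for dx in range(-SEARCH_RANGE, SEARCH_RANGE + 1):
            my, mx = start_mv[0] + dy, start_mv[1] + dx
            ry, rx = y + my, x + mx
            if ry < 0 or rx < 0 or ry + h > ref_pic.shape[0] or rx + w > ref_pic.shape[1]:
                continue  # candidate position falls outside the reference picture
            cost = sad(cur_pic[y:y+h, x:x+w], ref_pic[ry:ry+h, rx:rx+w])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (my, mx)
    return best_mv

def motion_compensate(ref_pic, pos, size, mv):
    """Prediction image of the block by motion compensation using mv."""
    y, x = pos
    h, w = size
    ry, rx = y + mv[0], x + mv[1]
    return ref_pic[ry:ry+h, rx:rx+w].copy()

def encode_merge_block(cur_pic, ref_pic, pos, size, merge_mv):
    """Merge-mode processing of one block: take the unrefined motion vector
    from a previously processed block, refine it by a local search, and
    generate the prediction image with the refined vector."""
    unrefined_mv = merge_mv                                            # first / third MV
    refined_mv = refine_mv(cur_pic, ref_pic, pos, size, unrefined_mv)  # second / fourth MV
    pred = motion_compensate(ref_pic, pos, size, refined_mv)
    return unrefined_mv, refined_mv, pred

# Usage example (hypothetical pictures and block positions).
rng = np.random.default_rng(0)
ref_pic = rng.integers(0, 256, (64, 64), dtype=np.uint8)
pic1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # first picture
pic2 = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # second, different picture

# First current block: merge candidate taken from a previously processed block.
mv_prev = (1, -1)
mv1_unrefined, mv1_refined, pred1 = encode_merge_block(pic1, ref_pic, (16, 16), (8, 8), mv_prev)

# Second current block in the SAME picture: its merge candidate uses the
# unrefined first motion vector of the first block.
mv3_spatial = mv1_unrefined
_, mv4_spatial, pred2 = encode_merge_block(pic1, ref_pic, (16, 24), (8, 8), mv3_spatial)

# Second current block in a DIFFERENT picture: its merge candidate uses the
# refined second motion vector of the first block.
mv3_temporal = mv1_refined
_, mv4_temporal, pred2_t = encode_merge_block(pic2, ref_pic, (16, 16), (8, 8), mv3_temporal)
```

As the claim recites, a later block in the same picture derives its merge candidate from the unrefined first motion vector, so its derivation does not have to wait for the refinement of the first block to complete, whereas a block in a different picture uses the refined second motion vector of the first block.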