US 11,997,310 B2
Encoder, decoder, encoding method, and decoding method
Kiyofumi Abe, Osaka (JP); Takahiro Nishi, Nara (JP); Tadamasa Toma, Osaka (JP); and Ryuichi Kanoh, Osaka (JP)
Assigned to PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA, Torrance, CA (US)
Filed by Panasonic Intellectual Property Corporation of America, Torrance, CA (US)
Filed on Feb. 14, 2023, as Appl. No. 18/109,435.
Application 18/109,435 is a continuation of application No. 17/473,479, filed on Sep. 13, 2021, granted, now 11,616,977.
Application 17/473,479 is a continuation of application No. 16/682,749, filed on Nov. 13, 2019, granted, now 11,146,811, issued on Oct. 12, 2021.
Application 16/682,749 is a continuation of application No. PCT/JP2018/018444, filed on May 14, 2018.
Claims priority of provisional application 62/508,517, filed on May 19, 2017.
Prior Publication US 2023/0217040 A1, Jul. 6, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 19/537 (2014.01); H04N 19/105 (2014.01); H04N 19/157 (2014.01); H04N 19/52 (2014.01); H04N 19/573 (2014.01)
CPC H04N 19/537 (2014.11) [H04N 19/105 (2014.11); H04N 19/157 (2014.11); H04N 19/52 (2014.11); H04N 19/573 (2014.11)] 2 Claims
 
1. An encoding method comprising:
performing a first process to derive a first motion vector of a current block; and
generating a prediction image of the current block by referring to a spatial gradient of luminance generated by performing motion compensation using the first motion vector derived,
wherein the first motion vector is not encoded in a bitstream and the first motion vector is not modified per sub-block, the sub-block being obtained by splitting the current block,
a second process, which is the same as the first process, is performed in a decoder to derive the first motion vector, and
in the first process, an evaluation value is calculated for each candidate for the first motion vector, and a candidate motion vector having a better evaluation value than the other evaluation values is determined as the first motion vector, each of the evaluation values corresponding to a difference between two regions in two reconstructed images.
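
The first process recited in claim 1, in which candidates are scored by the difference between two regions in two reconstructed images, resembles decoder-side motion vector derivation by bilateral matching. The Python sketch below shows one way such a candidate evaluation could be written; the helper names (sad, fetch_region, derive_first_mv), the SAD cost, and the use of integer motion vectors with mirrored displacement are illustrative assumptions, not details taken from the patent.

    import numpy as np

    def sad(block_a, block_b):
        """Sum of absolute differences between two equal-sized regions."""
        return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

    def fetch_region(picture, x, y, w, h, mv):
        """Region of a reconstructed picture at (x, y), displaced by an integer
        motion vector mv = (mv_x, mv_y).  Sub-pel interpolation and boundary
        padding are omitted for brevity."""
        px, py = x + mv[0], y + mv[1]
        return picture[py:py + h, px:px + w]

    def derive_first_mv(ref0, ref1, x, y, w, h, candidates):
        """Return the candidate motion vector whose two motion-compensated
        regions in the two reconstructed pictures ref0/ref1 differ the least,
        i.e. the candidate with the best evaluation value.  Mirrored
        displacement between the two pictures is assumed (bilateral matching)."""
        best_mv, best_cost = None, None
        for mv in candidates:
            region0 = fetch_region(ref0, x, y, w, h, mv)
            region1 = fetch_region(ref1, x, y, w, h, (-mv[0], -mv[1]))
            cost = sad(region0, region1)  # evaluation value for this candidate
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = mv, cost
        return best_mv

Because the encoder and a decoder can run this identical search using only reconstructed pictures, the selected vector does not need to be written to the bitstream, which is what the "not encoded in a bitstream" limitation expresses.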
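
Generating the prediction image "by referring to a spatial gradient of luminance" of the motion-compensated signal is reminiscent of bi-directional optical flow (BIO/BDOF), in which per-sample gradients of the two motion-compensated predictions refine the prediction without altering the block-level motion vector. A minimal sketch under that assumption follows; the gradient computation via numpy.gradient and the per-block flow parameters (vx, vy) are simplifications for illustration, not the patented procedure.

    import numpy as np

    def spatial_gradients(pred):
        """Central-difference luminance gradients of a motion-compensated
        prediction block; numpy.gradient returns (d/dy, d/dx) for a 2-D array."""
        gy, gx = np.gradient(pred.astype(np.float64))
        return gx, gy

    def gradient_refined_prediction(pred0, pred1, vx, vy):
        """Average two motion-compensated predictions and add a correction term
        built from their spatial gradients and a small displacement (vx, vy),
        in the spirit of bi-directional optical flow.  The block-level motion
        vector itself is left unchanged, consistent with the claim's
        'not modified per sub-block' limitation."""
        gx0, gy0 = spatial_gradients(pred0)
        gx1, gy1 = spatial_gradients(pred1)
        correction = vx * (gx0 - gx1) + vy * (gy0 - gy1)
        return (pred0.astype(np.float64) + pred1.astype(np.float64) + correction) / 2.0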