US 12,238,314 B2
Prediction using extra-buffer samples for intra block copy in video coding
Jizheng Xu, San Diego, CA (US); Li Zhang, San Diego, CA (US); Kai Zhang, San Diego, CA (US); Hongbin Liu, Beijing (CN); and Yue Wang, Beijing (CN)
Assigned to BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., Beijing (CN); and BYTEDANCE INC., Los Angeles, CA (US)
Filed by Beijing Bytedance Network Technology Co., Ltd., Beijing (CN); and Bytedance Inc., Los Angeles, CA (US)
Filed on Jul. 27, 2021, as Appl. No. 17/386,510.
Application 17/386,510 is a continuation of application No. PCT/CN2020/074161, filed on Feb. 2, 2020.
Claims priority of application No. PCT/CN2019/074598 (WO), filed on Feb. 2, 2019; application No. PCT/CN2019/076695 (WO), filed on Mar. 1, 2019; application No. PCT/CN2019/076848 (WO), filed on Mar. 4, 2019; application No. PCT/CN2019/077725 (WO), filed on Mar. 11, 2019; application No. PCT/CN2019/079151 (WO), filed on Mar. 21, 2019; application No. PCT/CN2019/085862 (WO), filed on May 7, 2019; application No. PCT/CN2019/088129 (WO), filed on May 23, 2019; application No. PCT/CN2019/091691 (WO), filed on Jun. 18, 2019; application No. PCT/CN2019/093552 (WO), filed on Jun. 28, 2019; application No. PCT/CN2019/094957 (WO), filed on Jul. 6, 2019; application No. PCT/CN2019/095297 (WO), filed on Jul. 9, 2019; application No. PCT/CN2019/095504 (WO), filed on Jul. 10, 2019; application No. PCT/CN2019/095656 (WO), filed on Jul. 11, 2019; application No. PCT/CN2019/095913 (WO), filed on Jul. 13, 2019; and application No. PCT/CN2019/096048 (WO), filed on Jul. 15, 2019.
Prior Publication US 2021/0385437 A1, Dec. 9, 2021
Int. Cl. H04N 19/433 (2014.01); H04N 19/105 (2014.01); H04N 19/117 (2014.01); H04N 19/132 (2014.01); H04N 19/137 (2014.01); H04N 19/139 (2014.01); H04N 19/146 (2014.01); H04N 19/159 (2014.01); H04N 19/169 (2014.01); H04N 19/174 (2014.01); H04N 19/176 (2014.01); H04N 19/186 (2014.01); H04N 19/423 (2014.01); H04N 19/517 (2014.01); H04N 19/52 (2014.01); H04N 19/593 (2014.01); H04N 19/80 (2014.01); H04N 19/82 (2014.01); H04N 19/96 (2014.01); H04N 19/86 (2014.01)
CPC H04N 19/433 (2014.11) [H04N 19/105 (2014.11); H04N 19/132 (2014.11); H04N 19/137 (2014.11); H04N 19/139 (2014.11); H04N 19/146 (2014.11); H04N 19/159 (2014.11); H04N 19/174 (2014.11); H04N 19/176 (2014.11); H04N 19/186 (2014.11); H04N 19/1883 (2014.11); H04N 19/423 (2014.11); H04N 19/517 (2014.11); H04N 19/52 (2014.11); H04N 19/593 (2014.11); H04N 19/80 (2014.11); H04N 19/82 (2014.11); H04N 19/96 (2014.11); H04N 19/117 (2014.11); H04N 19/86 (2014.11)] 20 Claims
OG exemplary drawing
 
1. A method of processing video data, comprising:
determining, for a conversion between a current video block in a first video region of visual media data and a bitstream of the current video block, a virtual intra block copy (IBC) buffer that stores reference samples for prediction in an IBC mode, wherein the conversion is performed in the IBC mode which is based on motion information related to a reconstructed block located in the first video region without referring to a reference picture;
resetting the virtual IBC buffer before performing the conversion for the current video block;
generating reconstructed samples for a first video block of the first video region;
storing the reconstructed samples in the virtual IBC buffer without applying a filtering operation on the reconstructed samples;
for a first reference sample of the reference samples, spatially located at location (x0, y0) of the current video block and having a block vector (BVx, BVy), computing a first reference location (P, Q), wherein the first reference location (P, Q) is determined using the block vector (BVx, BVy) and the location (x0, y0);
upon determining that the first reference location (P, Q) lies outside the virtual IBC buffer, re-computing a second reference sample of the reference samples;
deriving a block vector for the current video block; and
generating, based on the virtual IBC buffer and the block vector, prediction samples for the current video block,
wherein a virtual IBC buffer derived from the first video region is disabled for a second video region which is different from the first video region,
wherein the first video region and the second video region correspond to two different coding tree unit (CTU) rows, wherein the two different CTU rows are in a common picture, and
wherein the conversion includes encoding the current video block into the bitstream or decoding the current video block from the bitstream.
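The steps recited in claim 1 outline an algorithmic flow: reset a virtual IBC buffer per video region, store unfiltered reconstructed samples into it, map a sample location and block vector to a reference location (P, Q), re-compute the reference when (P, Q) falls outside the buffer, and read prediction samples back out. The C++ sketch below illustrates one possible reading of that flow. The class name, the sentinel value used for unavailable samples, and the modulo wrap-around used when (P, Q) lies outside the buffer are illustrative assumptions, not details taken from the claim.

// Hedged sketch of a virtual IBC reference buffer, loosely following claim 1.
// Buffer dimensions, the "unavailable" marker, and the wrap-around re-mapping
// rule are assumptions made for illustration only.
#include <algorithm>
#include <cstdint>
#include <vector>

class VirtualIbcBuffer {
public:
    VirtualIbcBuffer(int width, int height)
        : width_(width), height_(height), samples_(width * height, kUnavailable) {}

    // Reset the buffer before converting blocks of a new video region
    // (e.g. a new CTU row), so samples derived from another region are not used.
    void reset() { std::fill(samples_.begin(), samples_.end(), kUnavailable); }

    // Store reconstructed samples of a block directly, i.e. before any in-loop
    // filtering (deblocking, SAO, ALF) has been applied to them.
    void storeReconstructed(int x, int y, int blkW, int blkH,
                            const std::vector<int16_t>& recon) {
        for (int j = 0; j < blkH; ++j)
            for (int i = 0; i < blkW; ++i)
                samples_[wrap(y + j, height_) * width_ + wrap(x + i, width_)] =
                    recon[j * blkW + i];
    }

    // For a sample at (x0, y0) with block vector (BVx, BVy), compute the
    // reference location (P, Q). If it lies outside the buffer, re-compute it;
    // modulo wrap-around is one assumed re-computation rule.
    int16_t predictSample(int x0, int y0, int bvx, int bvy) const {
        int p = x0 + bvx;
        int q = y0 + bvy;
        if (p < 0 || p >= width_ || q < 0 || q >= height_) {
            p = wrap(p, width_);
            q = wrap(q, height_);
        }
        return samples_[q * width_ + p];
    }

    static constexpr int16_t kUnavailable = -1;

private:
    static int wrap(int v, int size) { return ((v % size) + size) % size; }
    int width_, height_;
    std::vector<int16_t> samples_;
};

In such a sketch, a codec would call reset() at the start of each CTU row (since, per the claim, a buffer derived from one CTU row is disabled for a different CTU row of the same picture), call storeReconstructed() after each block of the region is reconstructed, and call predictSample() per sample when forming the IBC prediction of the current video block from its derived block vector.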