US 11,659,202 B2
Position-dependent intra prediction sample filtering
Hongbin Liu, Beijing (CN); Li Zhang, San Diego, CA (US); Kai Zhang, San Diego, CA (US); Jizheng Xu, San Diego, CA (US); and Yue Wang, Beijing (CN)
Assigned to BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., Beijing (CN); and BYTEDANCE INC., Los Angeles, CA (US)
Filed by Beijing Bytedance Network Technology Co., Ltd., Beijing (CN); and Bytedance Inc., Los Angeles, CA (US)
Filed on Jan. 28, 2022, as Appl. No. 17/587,768.
Application 17/587,768 is a continuation of application No. PCT/CN2020/109207, filed on Aug. 14, 2020.
Claims priority of application No. PCT/CN2019/100615 (WO), filed on Aug. 14, 2019; and application No. PCT/CN2019/117270 (WO), filed on Nov. 11, 2019.
Prior Publication US 2022/0150541 A1, May 12, 2022
Int. Cl. H04N 19/11 (2014.01); H04N 19/186 (2014.01); H04N 19/593 (2014.01); H04N 19/117 (2014.01); H04N 19/132 (2014.01); H04N 19/176 (2014.01)
CPC H04N 19/593 (2014.11) [H04N 19/11 (2014.11); H04N 19/117 (2014.11); H04N 19/132 (2014.11); H04N 19/176 (2014.11); H04N 19/186 (2014.11)] 20 Claims
OG exemplary drawing
 
1. A method of video processing, comprising:
determining, for a conversion between a current video block of a video and a bitstream of the video, whether a cross-component linear model (CCLM) mode is used for the current video block,
determining, for the current video block, whether a filtering process based on a position-dependent intra prediction is used for the current video block based on whether the CCLM mode is used for the current video block,
generating reconstructed samples for the current video block based on whether the filtering process is used for the current video block, and
performing the conversion according to the reconstructed samples,
wherein the filtering process combines neighboring samples with a prediction signal of the current video block to generate a modified prediction signal of the current video block,
wherein the CCLM mode uses a linear model to derive prediction values of a chroma component from another component,
wherein in the filtering process, the modified prediction signal of the current video block is generated based on (refL[x][y]*wL[x]+refT[x][y]*wT[y]+(64−wL[x]−wT[y])*predSamples[x][y]+32)>>6, where predSamples[x][y] denotes the prediction signal of the current video block,
wherein in a case that a variable of predModeIntra indicates a planar mode and in a case that the variable of predModeIntra indicates a DC mode, refL[x][y] is equal to p[−1][y], refT[x][y] is equal to p[x][−1], wT[y]=32>>((y<<1)>>nScale) and wL[x]=32>>((x<<1)>>nScale),
wherein in a case that the variable of predModeIntra indicates INTRA_ANGULAR18, refL[x][y]=p[−1][y]−p[−1][−1]+predSamples[x][y], refT[x][y]=p[x][−1]−p[−1][−1]+predSamples[x][y], wT[y]=32>>((y<<1)>>nScale), and wL[x]=0, and
wherein in a case that the variable of predModeIntra indicates INTRA_ANGULAR50, refL[x][y]=p[−1][y]−p[−1][−1]+predSamples[x][y], refT[x][y]=p[x][−1]−p[−1][−1]+predSamples[x][y], wT[y]=0, and wL[x]=32>>((x<<1)>>nScale);
wherein p[x][y] denotes neighboring samples, and nScale is equal to (Log2(nTbW)+Log2(nTbH)−2)>>2, where nTbW denotes a width of the current video block, and nTbH denotes a height of the current video block, and
wherein the variable of predModeIntra specifies an intra prediction mode of the current video block.
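For illustration only (not part of the patent text): a minimal C sketch of the position-dependent filtering recited in claim 1. The function and variable names (pdpc_filter, pred, left, top, topLeft, log2_int) are hypothetical; the reference-sample and weight derivations follow the claim, with nScale equal to (Log2(nTbW)+Log2(nTbH)−2)>>2 and the modified prediction signal given by (refL*wL+refT*wT+(64−wL−wT)*predSamples+32)>>6.

    #include <stdint.h>

    /* Hypothetical mode identifiers standing in for predModeIntra values. */
    enum { MODE_PLANAR, MODE_DC, MODE_ANGULAR18, MODE_ANGULAR50 };

    /* Integer base-2 logarithm for power-of-two block dimensions. */
    static int log2_int(int v) { int n = 0; while (v > 1) { v >>= 1; n++; } return n; }

    /* pred:    nTbW x nTbH prediction signal, row-major, modified in place
     * left[y]: neighboring samples p[-1][y]
     * top[x]:  neighboring samples p[x][-1]
     * topLeft: neighboring sample p[-1][-1]
     */
    static void pdpc_filter(int16_t *pred, int nTbW, int nTbH,
                            const int16_t *left, const int16_t *top, int16_t topLeft,
                            int predModeIntra)
    {
        int nScale = (log2_int(nTbW) + log2_int(nTbH) - 2) >> 2;

        for (int y = 0; y < nTbH; y++) {
            for (int x = 0; x < nTbW; x++) {
                int p = pred[y * nTbW + x];
                int refL, refT, wL, wT;

                if (predModeIntra == MODE_PLANAR || predModeIntra == MODE_DC) {
                    /* Planar/DC: weights decay with distance from the left and top edges. */
                    refL = left[y];
                    refT = top[x];
                    wT   = 32 >> ((y << 1) >> nScale);
                    wL   = 32 >> ((x << 1) >> nScale);
                } else {
                    /* INTRA_ANGULAR18 (horizontal) or INTRA_ANGULAR50 (vertical). */
                    refL = left[y] - topLeft + p;
                    refT = top[x]  - topLeft + p;
                    wT   = (predModeIntra == MODE_ANGULAR18) ? 32 >> ((y << 1) >> nScale) : 0;
                    wL   = (predModeIntra == MODE_ANGULAR50) ? 32 >> ((x << 1) >> nScale) : 0;
                }

                /* Weighted combination of reference samples and the prediction signal. */
                pred[y * nTbW + x] =
                    (int16_t)((refL * wL + refT * wT + (64 - wL - wT) * p + 32) >> 6);
            }
        }
    }

In this sketch, the weights wL[x] and wT[y] halve every 2*nScale samples away from the left and top block boundaries, so the neighboring samples influence only the prediction samples closest to those boundaries.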