US 11,902,507 B2
Parameter derivation for intra prediction
Kai Zhang, San Diego, CA (US); Li Zhang, San Diego, CA (US); Hongbin Liu, Beijing (CN); Jizheng Xu, San Diego, CA (US); and Yue Wang, Beijing (CN)
Assigned to BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD, Beijing (CN); and BYTEDANCE INC., Los Angeles, CA (US)
Filed by Beijing Bytedance Network Technology Co., Ltd., Beijing (CN); and Bytedance Inc., Los Angeles, CA (US)
Filed on May 26, 2021, as Appl. No. 17/330,501.
Application 17/330,501 is a continuation of application No. PCT/CN2019/121850, filed on Nov. 29, 2019.
Claims priority of application No. PCT/CN2018/118799 (WO), filed on Dec. 1, 2018; application No. PCT/CN2018/119709 (WO), filed on Dec. 7, 2018; application No. PCT/CN2018/125412 (WO), filed on Dec. 29, 2018; application No. PCT/CN2019/070002 (WO), filed on Jan. 1, 2019; application No. PCT/CN2019/075874 (WO), filed on Feb. 22, 2019; application No. PCT/CN2019/075993 (WO), filed on Feb. 24, 2019; application No. PCT/CN2019/076195 (WO), filed on Feb. 26, 2019; application No. PCT/CN2019/079396 (WO), filed on Mar. 24, 2019; application No. PCT/CN2019/079431 (WO), filed on Mar. 25, 2019; and application No. PCT/CN2019/079769 (WO), filed on Mar. 26, 2019.
Prior Publication US 2021/0344902 A1, Nov. 4, 2021
Int. Cl. H04N 7/12 (2006.01); H04N 19/105 (2014.01); H04N 19/117 (2014.01); H04N 19/159 (2014.01); H04N 19/167 (2014.01); H04N 19/176 (2014.01); H04N 19/186 (2014.01); H04N 19/189 (2014.01)
CPC H04N 19/105 (2014.11) [H04N 19/117 (2014.11); H04N 19/159 (2014.11); H04N 19/167 (2014.11); H04N 19/176 (2014.11); H04N 19/186 (2014.11); H04N 19/189 (2014.11)] 20 Claims
OG exemplary drawing
 
1. A method of processing video data, comprising:
determining, for a conversion between a current video block of a video and a bitstream of the video, parameters for a linear model prediction or cross-color component prediction based on refined neighboring luma samples and chroma samples of the current video block;
deriving prediction values of a chroma component of the current video block based on the parameters and refined internal luma samples of the current video block; and
performing the conversion based on the prediction values;
wherein the refined neighboring luma samples and the refined internal luma samples of the current video block are determined by down-sampling neighboring luma samples and internal luma samples followed by a non-linear process;
wherein the parameters for the linear model prediction are α and β, wherein α=(C1−C0)/(L1−L0) and β=C0−αL0, wherein C0 and C1 are derived from neighboring chroma samples, and wherein L0 and L1 are derived from neighboring luma samples;
wherein C0 and L0 are based on S neighboring chroma and luma samples, denoted {Cx1, Cx2, . . . , CxS} and {Lx1, Lx2, . . . , LxS}, respectively, wherein C1 and L1 are based on T neighboring chroma and luma samples, denoted {Cy1, Cy2, . . . , CyT} and {Ly1, Ly2, . . . , LyT}, respectively, wherein S=T,
wherein {Cx1, Cx2, . . . , CxS} correspond to {Lx1, Lx2, . . . , LxS},
wherein {Cy1, Cy2, . . . , CyT} correspond to {Ly1, Ly2, . . . , LyT},
wherein C0=f0(Cx1, Cx2, . . . , CxS), L0=f1(Lx1, Lx2, . . . , LxS), C1=f2(Cy1, Cy2, . . . , CyT) and L1=f3(Ly1, Ly2, . . . , LyT), and
wherein f0, f1, f2 and f3 are functions.
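The two-point derivation in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names are hypothetical, f0 through f3 are assumed to be simple averaging functions (one possible choice; the claim only requires that they be functions), and the sample refinement (down-sampling followed by a non-linear process) is assumed to have already been applied to the inputs.

```python
# Sketch of the linear-model parameter derivation alpha = (C1 - C0) / (L1 - L0),
# beta = C0 - alpha * L0, where C0/L0 and C1/L1 are each derived from a group
# of neighboring chroma/luma samples. All names here are illustrative.

def derive_lm_parameters(cx, lx, cy, ly):
    """Derive (alpha, beta) from two groups of neighboring samples.

    cx, lx: chroma/luma sample groups yielding the (C0, L0) point.
    cy, ly: chroma/luma sample groups yielding the (C1, L1) point.
    """
    assert len(cx) == len(lx) and len(cy) == len(ly)
    # f0..f3 assumed to be averages over each sample group.
    c0 = sum(cx) / len(cx)
    l0 = sum(lx) / len(lx)
    c1 = sum(cy) / len(cy)
    l1 = sum(ly) / len(ly)
    if l1 == l0:
        # Degenerate case (flat luma): fall back to zero slope.
        return 0.0, c0
    alpha = (c1 - c0) / (l1 - l0)
    beta = c0 - alpha * l0
    return alpha, beta

def predict_chroma(internal_luma, alpha, beta):
    """Predict chroma values from (refined) internal luma samples."""
    return [alpha * l + beta for l in internal_luma]
```

For example, with sample groups averaging to (L0, C0) = (82, 42) and (L1, C1) = (122, 62), the slope alpha is 20/40 = 0.5 and beta is 42 − 0.5·82 = 1, so an internal luma value of 100 predicts a chroma value of 51.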