US 12,323,599 B2
Multi-models for intra prediction
Kai Zhang, San Diego, CA (US); Li Zhang, San Diego, CA (US); Hongbin Liu, Beijing (CN); Jizheng Xu, San Diego, CA (US); and Yue Wang, Beijing (CN)
Assigned to BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., Beijing (CN); and BYTEDANCE INC., Los Angeles, CA (US)
Filed by Beijing Bytedance Network Technology Co., Ltd., Beijing (CN); and Bytedance Inc., Los Angeles, CA (US)
Filed on Jun. 30, 2023, as Appl. No. 18/345,608.
Application 18/345,608 is a continuation of application No. 17/246,821, filed on May 3, 2021.
Application 17/246,821 is a continuation of application No. PCT/CN2019/116015, filed on Nov. 6, 2019.
Claims priority of application No. PCT/CN2018/114158 (WO), filed on Nov. 6, 2018; application No. PCT/CN2018/118799 (WO), filed on Dec. 1, 2018; application No. PCT/CN2018/119709 (WO), filed on Dec. 7, 2018; application No. PCT/CN2018/125412 (WO), filed on Dec. 29, 2018; application No. PCT/CN2019/070002 (WO), filed on Jan. 1, 2019; application No. PCT/CN2019/075874 (WO), filed on Feb. 22, 2019; application No. PCT/CN2019/075993 (WO), filed on Feb. 24, 2019; application No. PCT/CN2019/076195 (WO), filed on Feb. 26, 2019; application No. PCT/CN2019/079396 (WO), filed on Mar. 24, 2019; application No. PCT/CN2019/079431 (WO), filed on Mar. 25, 2019; and application No. PCT/CN2019/079769 (WO), filed on Mar. 26, 2019.
Prior Publication US 2023/0345009 A1, Oct. 26, 2023
Int. Cl. H04N 7/12 (2006.01); H04N 19/105 (2014.01); H04N 19/11 (2014.01); H04N 19/132 (2014.01); H04N 19/149 (2014.01); H04N 19/159 (2014.01); H04N 19/176 (2014.01); H04N 19/184 (2014.01); H04N 19/186 (2014.01); H04N 19/189 (2014.01); H04N 19/30 (2014.01); H04N 19/42 (2014.01); H04N 19/50 (2014.01); H04N 19/593 (2014.01); H04N 19/70 (2014.01)
CPC H04N 19/149 (2014.11) [H04N 19/105 (2014.11); H04N 19/11 (2014.11); H04N 19/132 (2014.11); H04N 19/159 (2014.11); H04N 19/176 (2014.11); H04N 19/184 (2014.11); H04N 19/186 (2014.11); H04N 19/189 (2014.11); H04N 19/30 (2014.11); H04N 19/42 (2014.11); H04N 19/50 (2014.11); H04N 19/593 (2014.11); H04N 19/70 (2014.11)] 20 Claims
OG exemplary drawing
 
1. A method of processing video data, comprising:
determining, for a conversion between a current video block of a video that is a chroma block and a bitstream of the video, values of parameters of a cross-component linear model (CCLM) in a CCLM mode based on two chroma values denoted as minC and maxC and two luma values denoted as minY and maxY;
deriving, using the CCLM, predicted samples of the current video block based on reconstructed samples of a luma block corresponding to the current video block and the parameters of the CCLM; and
performing the conversion based on the predicted samples of the current video block,
wherein when maxY is not equal to minY, then one of the parameters is further determined based on a look-up table using an index derived based on a difference between maxY and minY,
wherein when maxY is equal to minY, then the predicted samples of the current video block are based on minC and not based on minY, maxY and maxC,
wherein maxY and minY are derived based on corresponding luma samples of selected chroma samples, wherein the selected chroma samples are selected from a group of neighboring chroma samples based on positions of the neighboring chroma samples and the CCLM mode of the current video block, wherein the positions of the neighboring chroma samples are derived based on distances from a top-left sample of the current video block to the neighboring chroma samples,
wherein minC and maxC are derived based on two or more selected chroma samples from the neighboring chroma samples of the current video block, wherein the two or more selected chroma samples are selected based on the CCLM mode of the current video block and availabilities of the neighboring chroma samples,
wherein the CCLM mode of the current video block is one of a first CCLM mode that derives the parameters of CCLM based on left neighboring chroma samples and above neighboring chroma samples, a second CCLM mode that derives the parameters of CCLM based on the left neighboring chroma samples and below-left neighboring samples, and a third CCLM mode that derives the parameters of CCLM based on the above neighboring chroma samples and above-right neighboring chroma samples, wherein a width and a height of the current video block are W and H, respectively,
wherein the two or more chroma samples are selected further based on the W and/or the H, and wherein exactly two chroma samples are selected from the above neighboring chroma samples in response to only the above neighboring chroma samples being available, the CCLM mode of the current video block being the first CCLM mode, and the W being equal to 2,
wherein positions of the two or more selected chroma samples are derived based on a first position offset value (F) and a step value (S), and wherein F and S are derived at least based on availabilities of the neighboring chroma samples of the current video block and a dimension of the current video block,
wherein a Floor operation comprises F=Floor(M/2^i), wherein M is a number of the neighboring chroma samples used to derive the selected chroma samples in a horizontal direction, or F=Floor(N/2^i), wherein N is a number of the neighboring chroma samples used to derive the selected chroma samples in a vertical direction, i is equal to 2 or 3, and the Floor operation is used to obtain an integer part of a number, and
wherein a Max operation comprises S=Max(1, Floor(M/2^j)), or S=Max(1, Floor(N/2^j)), j is equal to 1 or 2, and the Max operation is used to obtain a maximum of multiple numbers.
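The F/S position selection and the two-point linear-model derivation recited in the claim can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names are hypothetical, the fixed-point shift is illustrative, and a real decoder would replace the division by (maxY − minY) with the look-up-table step the claim describes.

```python
def select_positions(m, i=2, j=1):
    """Pick boundary sample positions from m neighboring samples.

    Per the claim: F = Floor(m / 2^i) is the first-position offset,
    S = Max(1, Floor(m / 2^j)) is the step between selected samples.
    """
    f = m >> i                 # Floor(m / 2^i) for non-negative m
    s = max(1, m >> j)         # Max(1, Floor(m / 2^j))
    return list(range(f, m, s))

def derive_cclm(min_y, max_y, min_c, max_c, shift=16):
    """Derive (alpha, beta) so that pred_c = ((alpha * rec_y) >> shift) + beta.

    When max_y == min_y the model degenerates and, as in the claim,
    the prediction is based on minC alone (alpha = 0, beta = minC).
    Plain integer division stands in for the claim's look-up table.
    """
    if max_y == min_y:
        return 0, min_c
    alpha = ((max_c - min_c) << shift) // (max_y - min_y)
    beta = min_c - ((alpha * min_y) >> shift)
    return alpha, beta
```

For example, with 8 above neighboring samples and i=2, j=1, the offset is 2 and the step is 4, selecting positions 2 and 6; fitting the model through (minY, minC)=(20, 30) and (maxY, maxC)=(100, 70) then predicts 50 for a reconstructed luma value of 60.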