CPC H04N 19/52 (2014.11); H04N 19/70 (2014.11). 15 Claims.

1. A method of video encoding, the method comprising:
selecting one of a plurality of reference pictures of a current picture as a collocated picture of the current picture, wherein the plurality of reference pictures includes a first list of reference pictures and a second list of reference pictures distinct from the first list of reference pictures;
determining a temporal vector between the collocated picture and the current picture, including:
checking reference pictures associated with a spatially neighbouring block of a current coding unit (CU) in the current picture according to a fixed order until a reference picture associated with the spatially neighbouring block is the same as the collocated picture, and
choosing a motion vector of the spatially neighbouring block pointing to the reference picture as the temporal vector for the current CU;
splitting the current CU into a plurality of sub-CUs, each sub-CU corresponding to a respective subblock of the current picture;
obtaining a temporal motion vector predictor for each sub-CU of the current CU based on (i) the temporal vector between the collocated picture and the current picture and (ii) motion information of a block in the collocated picture that corresponds to the respective subblock of the current picture;
encoding the current CU according to temporal motion vector predictors of the plurality of sub-CUs of the current CU; and
generating a video bitstream including data associated with the encoded current picture, wherein the video bitstream includes a first syntax element and a second syntax element, wherein the first syntax element indicates whether the collocated picture is selected from the first list of reference pictures or the second list of reference pictures, and wherein the second syntax element indicates which reference picture in a list of reference pictures indicated by the first syntax element is the collocated picture,
wherein the obtaining a temporal motion vector predictor for each sub-CU of the current CU further comprises:
identifying a block in the collocated picture corresponding to a subblock of a sub-CU at or close to a center of the current CU in the current picture according to the temporal vector between the collocated picture and the current picture;
determining motion information of the identified block in the collocated picture; and
obtaining a motion vector from the determined motion information of the identified block as a default temporal motion vector predictor of a sub-CU of the current CU whose corresponding block in the collocated picture does not have motion information, wherein the corresponding block in the collocated picture is at a same relative location as the subblock of the sub-CU in the current picture according to the temporal vector between the collocated picture and the current picture.
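The temporal-vector derivation recited above (scanning spatial neighbours in a fixed order until one's reference picture matches the collocated picture) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the `MotionCandidate` container, the integer picture identifiers, and the zero-vector fallback are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionCandidate:
    """Hypothetical record for one spatially neighbouring block."""
    ref_picture: int               # identifier of the picture its MV points to
    motion_vector: Tuple[int, int]  # (mvx, mvy)

def derive_temporal_vector(neighbours: List[MotionCandidate],
                           collocated_picture: int) -> Tuple[int, int]:
    """Check neighbours in a fixed order; the first motion vector whose
    reference picture is the same as the collocated picture is chosen as
    the temporal vector for the current CU."""
    for cand in neighbours:
        if cand.ref_picture == collocated_picture:
            return cand.motion_vector
    # Fallback if no neighbour points to the collocated picture
    # (an assumption; the claim does not specify this case).
    return (0, 0)
```

The fixed scan order is represented simply by the order of the `neighbours` list.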
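The per-sub-CU predictor derivation, including the default predictor taken from the block at or near the CU center, can be sketched as below. Assumptions not in the claim: collocated-picture motion is modeled as a dictionary keyed by block position, positions are simple integer pairs, and a `None` value marks a block without motion information.

```python
from typing import Dict, List, Optional, Tuple

Pos = Tuple[int, int]
MV = Tuple[int, int]

def derive_sub_cu_tmvps(sub_cu_positions: List[Pos],
                        temporal_vector: MV,
                        collocated_motion: Dict[Pos, Optional[MV]],
                        center_pos: Pos) -> Dict[Pos, Optional[MV]]:
    """For each sub-CU, fetch motion from the collocated-picture block at the
    same relative location shifted by the temporal vector; where that block
    has no motion information, substitute the default predictor derived from
    the block covering the subblock at or close to the CU center."""
    tvx, tvy = temporal_vector
    cx, cy = center_pos
    # Default predictor: motion of the collocated block identified by
    # shifting the center subblock position with the temporal vector.
    default_mv = collocated_motion.get((cx + tvx, cy + tvy))
    predictors: Dict[Pos, Optional[MV]] = {}
    for (x, y) in sub_cu_positions:
        mv = collocated_motion.get((x + tvx, y + tvy))
        predictors[(x, y)] = mv if mv is not None else default_mv
    return predictors
```

In a real codec the collocated motion field would also be scaled by picture-order-count distance before use; that scaling is omitted here to keep the fallback logic visible.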
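The two bitstream syntax elements in the generating step (one selecting the reference picture list, one indexing the collocated picture within that list) can be illustrated with a toy writer. The list-of-symbols "bitstream" and the function name are hypothetical; a real encoder would entropy-code these values.

```python
from typing import List

def write_collocated_picture_syntax(bitstream: List[int],
                                    from_second_list: bool,
                                    ref_idx: int) -> None:
    """Append the two syntax elements from the claim:
    first, a flag indicating whether the collocated picture is selected
    from the first or the second reference picture list; second, the index
    of the collocated picture within the indicated list."""
    bitstream.append(1 if from_second_list else 0)
    bitstream.append(ref_idx)
```

A decoder reads the flag first to know which list the index applies to, which is why the claim orders the two elements this way.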