CPC H04N 19/10 (2014.11) [H04N 19/105 (2014.11); H04N 19/136 (2014.11); H04N 19/197 (2014.11); H04N 19/196 (2014.11); H04N 19/46 (2014.11); H04N 19/61 (2014.11)]
3 Claims
1. A method of decoding video information, the method comprising:
generating an occurrence probability of a split flag for a current coding unit based on a split depth value for one or more spatial neighboring blocks of the current coding unit,
wherein the split flag indicates whether a coding unit is split, and the split depth value indicates a depth level of a coding unit according to the split flag, and
wherein the coding unit is not split when the split flag is equal to 0, and the coding unit is split into four coding units, each having half the horizontal and vertical size of the coding unit, when the split flag is equal to 1;
performing entropy decoding of the split flag on the current coding unit based on the generated occurrence probability;
decoding a prediction flag indicating whether information of the current coding unit is the same as prediction information derived from information of a temporal neighboring block of the current coding unit, the information of the current coding unit including a motion vector of the current coding unit;
determining the information of the current coding unit based on the decoded prediction flag; and
performing inter-prediction of the current coding unit based on the determined information of the current coding unit,
wherein, when the information of the current coding unit is the same as the prediction information derived from information of the temporal neighboring block, the information of the current coding unit is determined to be the prediction information, and each of a first reference index and a second reference index for the current coding unit has a specific value indicating a specific reference frame for a current frame to which the current coding unit belongs, based on two lists of reference frames being used for the current coding unit,
wherein the current coding unit is decoded based on a temporally previous frame and a temporally subsequent frame of the current frame, the temporally previous frame being indicated by the first reference index and the temporally subsequent frame being indicated by the second reference index, and
wherein, when the information of the current coding unit is different from the prediction information derived from information of the temporal neighboring block, the information of the current coding unit is derived by adding a difference value obtained from a bitstream to the prediction information.
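The quadtree parsing and context modeling recited in the first two steps of the claim can be illustrated with a short sketch. The Python below is a minimal sketch, assuming a hypothetical `read_bit(p)` arithmetic-decoder interface and an illustrative probability rule (`base_p`, `step`): the claim does not fix how the occurrence probability is computed from the neighboring split depths, only that it is derived from them.

```python
# Minimal sketch: derive P(split_flag == 1) for the current coding unit
# from the split depths of spatial neighbors, then "entropy-decode" the
# flag and recurse on a quadtree. The probability rule and BitReader
# interface are illustrative assumptions, not the claimed codec's coder.

def split_flag_probability(cur_depth, neighbor_depths,
                           base_p=0.5, step=0.15):
    """Neighbors split deeper than the current depth make a further
    split more likely; shallower neighbors make it less likely."""
    p = base_p
    for d in neighbor_depths:
        if d > cur_depth:
            p += step
        elif d < cur_depth:
            p -= step
    return min(max(p, 0.05), 0.95)  # clamp away from certainty


def decode_split_flag(read_bit, cur_depth, neighbor_depths):
    """Entropy-decode split_flag with the derived occurrence probability.
    `read_bit(p)` stands in for an arithmetic decoder consuming one bin
    whose estimated probability of being 1 is `p`."""
    p = split_flag_probability(cur_depth, neighbor_depths)
    return read_bit(p)  # 0: CU not split; 1: CU splits into 4 sub-CUs


def parse_coding_tree(read_bit, x, y, size, depth, depth_map, max_depth=3):
    """split_flag == 1 yields four coding units, each with half the
    horizontal and half the vertical size of the current coding unit."""
    neighbors = [depth_map.get((x - size, y), 0),   # left neighbor depth
                 depth_map.get((x, y - size), 0)]   # above neighbor depth
    if depth < max_depth and decode_split_flag(read_bit, depth, neighbors):
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            parse_coding_tree(read_bit, x + dx, y + dy, half,
                              depth + 1, depth_map, max_depth)
    else:
        depth_map[(x, y)] = depth  # leaf CU recorded at this depth
```

For testing, `read_bit` can be stubbed as `lambda p: int(random.random() < p)`; a real decoder would instead drive it from its arithmetic-coding state.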
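The prediction-flag logic can be sketched the same way. `MotionInfo`, `determine_motion_info`, and the choice of `specific_ref_idx = 0` are hypothetical names and assumptions; the claim only requires that, when the flag indicates equality, the coding unit's information is set to the temporal prediction and both reference indices take a specific value, and that otherwise a decoded difference is added to the prediction.

```python
# Minimal sketch: determine the current CU's motion information from the
# decoded prediction flag. Names and the specific reference index value
# are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class MotionInfo:
    mv: tuple          # (mv_x, mv_y) motion vector
    ref_idx_l0: int    # first reference index (reference list 0)
    ref_idx_l1: int    # second reference index (reference list 1)


def determine_motion_info(pred_flag, temporal_pred, mvd,
                          specific_ref_idx=0):
    """pred_flag == 1: the CU's information equals the prediction derived
    from the temporal neighboring block, and each reference index takes a
    specific value (assumed here to be index 0 of each list).
    pred_flag == 0: the motion vector is prediction plus the difference
    value obtained from the bitstream."""
    if pred_flag:
        return MotionInfo(mv=temporal_pred.mv,
                          ref_idx_l0=specific_ref_idx,
                          ref_idx_l1=specific_ref_idx)
    mv = (temporal_pred.mv[0] + mvd[0], temporal_pred.mv[1] + mvd[1])
    # Reference-index handling in this branch is not fixed by the claim;
    # carrying over the predicted indices is an assumption.
    return MotionInfo(mv=mv,
                      ref_idx_l0=temporal_pred.ref_idx_l0,
                      ref_idx_l1=temporal_pred.ref_idx_l1)
```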
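Finally, inter-prediction from two reference lists, with the first reference index indicating a temporally previous frame and the second a temporally subsequent frame, might look like the integer-pel sketch below. The per-list motion vectors `mv0`/`mv1` and the plain rounded average are simplifying assumptions; the claim recites decoding the coding unit based on both frames, not a particular interpolation or weighting.

```python
# Minimal sketch: bi-prediction averaging one block from a temporally
# previous frame (list 0) and one from a temporally subsequent frame
# (list 1). Frames are plain 2-D arrays; sub-pel interpolation and
# boundary handling are omitted for brevity.

def bi_predict_block(list0, list1, ref_idx0, ref_idx1,
                     x, y, w, h, mv0, mv1):
    """Return the (w x h) bi-predicted block at position (x, y)."""
    prev_frame = list0[ref_idx0]   # indicated by the first reference index
    next_frame = list1[ref_idx1]   # indicated by the second reference index
    pred = [[0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            p0 = prev_frame[y + j + mv0[1]][x + i + mv0[0]]
            p1 = next_frame[y + j + mv1[1]][x + i + mv1[0]]
            pred[j][i] = (p0 + p1 + 1) >> 1  # rounded average of the two
    return pred
```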