CPC G06T 9/00 (2013.01) [G06T 17/00 (2013.01); H04N 19/52 (2014.11); H04N 19/56 (2014.11); H04N 19/597 (2014.11); G06T 2210/56 (2013.01)] | 20 Claims |
1. A computer-implemented method, comprising:
generating, by one or more processors, a segmentation of three-dimensional point cloud data of recorded media based on continuity data of the three-dimensional point cloud data;
projecting, by the one or more processors, a representation of the segmented three-dimensional point cloud data onto one or more sides of a three-dimensional bounding box, wherein the representation of the segmented three-dimensional point cloud data differs depending on the side of the three-dimensional bounding box onto which it is projected;
in response to projecting the representation of the segmented three-dimensional point cloud data onto the one or more sides of the three-dimensional bounding box, generating, by the one or more processors, one or more patches based on the projected representation of the segmented three-dimensional point cloud data;
generating, by the one or more processors, a first frame of the one or more patches;
generating, by the one or more processors, first auxiliary information for the first frame;
generating, by the one or more processors, second auxiliary information for a reference frame, wherein the reference frame includes one or more second patches and the reference frame is a previously encoded and transmitted frame;
identifying, by the one or more processors, a first patch from the first frame that matches a second patch from the one or more second patches of the reference frame based on the first auxiliary information and the second auxiliary information;
generating, by the one or more processors, a motion vector candidate between the first patch and the second patch based on a difference between the first auxiliary information and the second auxiliary information; and
performing, by the one or more processors, motion compensation using the motion vector candidate.