CPC G06T 13/40 (2013.01) [G06T 7/20 (2013.01); G06T 7/70 (2017.01); H04N 5/272 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30196 (2013.01); G06T 2207/30221 (2013.01)] | 20 Claims |
1. A computer-implemented method for dynamically generating animation of characters from real-life motion capture video, the method comprising:
accessing motion capture video, the motion capture video including a motion capture actor in motion;
inputting the motion capture video to a first neural network;
receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network;
overlaying the pose information on the motion capture video to generate a modified motion capture video;
identifying a first window of frames of the modified motion capture video, wherein the first window of frames comprises a current frame, one or more frames preceding the current frame, and one or more frames following the current frame;
inputting the first window of frames of the modified motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame;
receiving, as output of the second neural network, a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame;
identifying a second window of frames, wherein the second window of frames comprises the first predicted frame;
inputting the second window of frames and the first local motion phase to the second neural network; and
receiving, as output of the second neural network, a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
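Translated into code, the claimed method reads as an autoregressive loop: a first network annotates each frame with pose information, a window of frames around the current frame feeds a second network, and each predicted frame and its local motion phase are fed back to produce the next prediction. The sketch below is a hypothetical stand-in, not the patent's implementation: the function names, the toy pose values, the window bookkeeping, and the phase initialization of 0.0 are all assumptions made for illustration; real systems would use trained neural networks in place of the stub functions.

```python
def pose_network(frame):
    """Stand-in for the first neural network: pose info per frame.
    Here "pose" is a toy two-value list derived from the frame id."""
    return [frame["id"] * 0.1, frame["id"] * 0.2]

def overlay(frame, pose):
    """Overlay pose information on a frame to form a modified frame."""
    return {**frame, "pose": pose}

def prediction_network(window, current_idx, local_phase):
    """Stand-in for the second neural network: given a window of frames
    and the fed-back local motion phase, predict the frame following the
    current frame plus the local motion phase for that predicted frame."""
    current = window[current_idx]
    predicted = {"id": current["id"] + 1,
                 "pose": [p + 0.1 for p in current["pose"]]}
    new_phase = (local_phase + 0.25) % 1.0  # toy phase update
    return predicted, new_phase

def generate(frames, past=2, future=2, steps=2):
    # Pose estimation + overlay -> modified motion capture video.
    modified = [overlay(f, pose_network(f)) for f in frames]
    # First window: past frames, the current frame, future frames.
    current_idx = past
    window = modified[current_idx - past : current_idx + future + 1]
    phase = 0.0  # assumption: initial local motion phase
    outputs = []
    for _ in range(steps):
        predicted, phase = prediction_network(window, current_idx, phase)
        outputs.append((predicted, phase))
        # Slide the window; the new window includes the predicted frame,
        # and the updated phase is fed back on the next iteration.
        window = window[1:] + [predicted]
    return outputs

frames = [{"id": i} for i in range(5)]
result = generate(frames)  # two (predicted frame, local motion phase) pairs
```

With five input frames and `past=2`, the first window spans frames 0-4 with frame 2 current, so the loop emits predictions for frames 3 and 4, each paired with its local motion phase.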