CPC G06T 19/20 (2013.01) [G06T 7/20 (2013.01); G06T 7/73 (2017.01); G06V 10/761 (2022.01); H04N 13/207 (2018.05); G06T 2207/20044 (2013.01); G06T 2207/30241 (2013.01); G06T 2219/2004 (2013.01)] — 20 Claims

1. A data processing method, performed by an electronic device, the method comprising:
obtaining a target video of a target object, the target video comprising at least one frame of image;
determining three-dimensional attitude angles of target joint points of the target object in each frame of image, and first three-dimensional coordinates of a first joint point and a second joint point corresponding to each frame of image in a first coordinate system, the first joint point and the second joint point being among the target joint points, the first coordinate system being a coordinate system corresponding to a virtual object, and the first joint point being a root node among the target joint points;
determining a displacement deviation of the second joint point according to the first three-dimensional coordinates of the second joint point corresponding to each frame of image and historical three-dimensional coordinates of the second joint point corresponding to a previous frame of image relative to each frame of image;
correcting the first three-dimensional coordinates of the first joint point according to the first three-dimensional coordinates and the historical three-dimensional coordinates of the second joint point in response to the displacement deviation being less than or equal to a threshold, to obtain target three-dimensional coordinates of the first joint point;
determining a filtering sliding window width for attitude angles of the target joint points according to a frame rate of the target video;
filtering an attitude angle sequence of each target joint point in each dimension according to the filtering sliding window width to obtain a filtered attitude angle sequence, the attitude angle sequence of a target joint point in one dimension comprising the attitude angle of the target joint point, in the dimension, in each frame of image of the target video;
obtaining a filtered three-dimensional attitude angle of each target joint point in each frame of image according to the filtered attitude angle sequence in each dimension; and
determining a three-dimensional attitude of the virtual object corresponding to each frame of image according to the target three-dimensional coordinates of the first joint point and the filtered three-dimensional attitude angles of the target joint points in each frame of image.
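The displacement-deviation check and root-joint correction recited in claim 1 (determining the deviation of the second joint point between consecutive frames and, when it is at or below a threshold, correcting the coordinates of the first joint point from the second joint point's current and historical coordinates) can be illustrated by the minimal Python sketch below. The function name, the threshold value, and the specific correction rule (shifting the root by the measured drift of the second joint point, e.g. a planted foot) are illustrative assumptions and are not taken from the claim.

```python
import numpy as np

def correct_root_coordinates(root_xyz, second_xyz, second_xyz_prev, threshold=0.02):
    """Correct the root (first) joint coordinates when the second joint is nearly static.

    root_xyz        : first 3-D coordinates of the first (root) joint, current frame
    second_xyz      : first 3-D coordinates of the second joint, current frame
    second_xyz_prev : historical 3-D coordinates of the second joint, previous frame
    threshold       : displacement-deviation threshold (assumed value, in coordinate units)
    """
    root_xyz = np.asarray(root_xyz, dtype=float)
    second_xyz = np.asarray(second_xyz, dtype=float)
    second_xyz_prev = np.asarray(second_xyz_prev, dtype=float)

    # Displacement deviation of the second joint between consecutive frames.
    deviation = float(np.linalg.norm(second_xyz - second_xyz_prev))

    if deviation <= threshold:
        # The second joint is treated as static, so the root is shifted by the
        # measured drift so that the second joint stays put in the first
        # coordinate system; the result is the target coordinates of the root.
        return root_xyz - (second_xyz - second_xyz_prev)

    # Deviation above the threshold: keep the estimated root coordinates unchanged.
    return root_xyz
```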
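The frame-rate-dependent sliding-window filtering of the per-dimension attitude angle sequences can likewise be sketched as follows. The mapping from frame rate to window width and the choice of a simple moving average are assumptions made only for illustration; the claim does not specify the filter type.

```python
import numpy as np

def sliding_window_width(frame_rate, window_seconds=0.2):
    """Derive an odd sliding-window width (in frames) from the video frame rate."""
    width = max(1, int(round(frame_rate * window_seconds)))
    return width if width % 2 == 1 else width + 1

def filter_attitude_angles(angles, frame_rate):
    """Moving-average filter applied per joint and per dimension of the attitude angles.

    angles : array of shape (num_frames, num_joints, 3); angles[t, j, d] is the
             attitude angle of target joint j in frame t in dimension d.
    Returns an array of the same shape holding the filtered attitude angles.
    """
    angles = np.asarray(angles, dtype=float)
    width = sliding_window_width(frame_rate)
    half = width // 2
    # Pad along the time axis so every frame has a full window.
    padded = np.pad(angles, ((half, half), (0, 0), (0, 0)), mode="edge")
    filtered = np.empty_like(angles)
    for t in range(angles.shape[0]):
        # Average each joint/dimension sequence over the sliding window.
        filtered[t] = padded[t:t + width].mean(axis=0)
    return filtered
```

Recombining the filtered per-dimension sequences frame by frame yields the filtered three-dimensional attitude angle of each target joint point, which, together with the target three-dimensional coordinates of the first joint point, determines the three-dimensional attitude of the virtual object for each frame as recited in the final step of the claim.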