CPC H04N 21/816 (2013.01) [G06F 3/011 (2013.01); G06F 3/017 (2013.01); G06T 15/00 (2013.01); G06T 15/20 (2013.01); G06V 40/28 (2022.01); H04N 21/2187 (2013.01)]
17 Claims

1. A video data generation method, applied to an electronic device, wherein the electronic device is configured to run a 3D rendering environment, the 3D rendering environment comprises 3D scene information, the 3D scene information is used to generate a 3D scene after rendering, the 3D scene information comprises at least one piece of virtual image information and at least one virtual lens, the virtual image information is used to generate a virtual image after rendering, the virtual image is driven by control information captured by a motion capture device, and the method comprises:
obtaining hand control information of the virtual image, and driving, based on the hand control information, a hand of the virtual image to move relative to the 3D scene;
controlling a position of the at least one virtual lens to move along with movement of the hand of the virtual image, wherein a relative distance between the position of the at least one virtual lens and a position of the hand of the virtual image is within a first preset range; and
generating video data based on lens information of the at least one virtual lens and the 3D scene information;
wherein before the obtaining the hand control information of the virtual image, the method further comprises:
obtaining first control information of the virtual image, and driving the virtual image to perform a first corresponding action based on the first control information, wherein the first corresponding action means that the action performed by the virtual image is consistent with or conforms to the first control information; and
in response to the first control information and/or the first corresponding action meeting a first preset condition, binding the at least one virtual lens to the hand of the virtual image, so that the relative distance between the position of the at least one virtual lens and the position of the hand of the virtual image is within the first preset range;
wherein before the generating the video data based on the lens information of the at least one virtual lens and the 3D scene information, the method further comprises:
obtaining second control information of the virtual image, and driving the virtual image to perform a second corresponding action; and
in response to the second control information and/or the second corresponding action meeting a second preset condition, unbinding the at least one virtual lens from the hand of the virtual image, and adjusting the at least one virtual lens to a state matching the second preset condition, wherein the first preset condition is different from the second preset condition.
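The following is a minimal, illustrative sketch (not part of the claim language) of the lens-binding behavior recited in claim 1: the virtual lens is bound to the avatar's hand when a first condition is met, follows the hand within a preset distance range, supplies the view from which video frames are generated, and is unbound when a second condition is met. All names (VirtualLens, Avatar, drive_hand, bind_lens, unbind_lens, follow_hand, render_frame, FIRST_PRESET_RANGE) and the fixed-offset follow strategy are assumptions for illustration and do not appear in the patent.

# Hypothetical sketch of the claimed method; names and structure are illustrative.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def distance(a: Vec3, b: Vec3) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

@dataclass
class VirtualLens:
    position: Vec3 = (0.0, 0.0, 0.0)
    bound: bool = False              # whether the lens is bound to the hand
    offset: Vec3 = (0.0, 0.1, 0.0)   # fixed hand-to-lens offset kept while bound

@dataclass
class Avatar:
    hand_position: Vec3 = (0.0, 0.0, 0.0)

FIRST_PRESET_RANGE = 0.5  # assumed maximum lens-to-hand distance while bound

def drive_hand(avatar: Avatar, hand_control: Vec3) -> None:
    """Drive the avatar's hand based on captured hand control information."""
    avatar.hand_position = hand_control

def bind_lens(lens: VirtualLens, avatar: Avatar) -> None:
    """Bind the lens to the hand (first preset condition met)."""
    lens.bound = True
    follow_hand(lens, avatar)

def unbind_lens(lens: VirtualLens, rest_position: Vec3) -> None:
    """Unbind the lens and move it to a state matching the second preset condition."""
    lens.bound = False
    lens.position = rest_position

def follow_hand(lens: VirtualLens, avatar: Avatar) -> None:
    """While bound, keep the lens within the first preset range of the hand."""
    if not lens.bound:
        return
    hx, hy, hz = avatar.hand_position
    ox, oy, oz = lens.offset
    lens.position = (hx + ox, hy + oy, hz + oz)
    assert distance(lens.position, avatar.hand_position) <= FIRST_PRESET_RANGE

def render_frame(lens: VirtualLens, scene_info: dict) -> dict:
    """Stand-in for rendering: describe one video frame from the lens and scene info."""
    return {"camera_position": lens.position, "scene": scene_info}

if __name__ == "__main__":
    avatar, lens = Avatar(), VirtualLens()
    bind_lens(lens, avatar)                                   # first preset condition met
    for hand_sample in [(0.1, 1.0, 0.2), (0.2, 1.1, 0.25)]:   # captured hand motion
        drive_hand(avatar, hand_sample)
        follow_hand(lens, avatar)
        print(render_frame(lens, {"objects": ["avatar"]}))
    unbind_lens(lens, rest_position=(0.0, 1.5, -2.0))         # second preset condition met

In this sketch the binding is modeled as a fixed offset so that the lens-to-hand distance trivially stays within the first preset range; an actual implementation could equally use a damped or spring-constrained follow, which the claim's "within a first preset range" wording would also cover.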