US 11,893,670 B2
Animation generation method, apparatus and system, and storage medium
Jinxiang Chai, Shanghai (CN); Wenping Zhao, Shanghai (CN); Shihao Jin, Shanghai (CN); Bo Liu, Shanghai (CN); Tonghui Zhu, Shanghai (CN); Hongbing Tan, Shanghai (CN); Xingtang Xiong, Shanghai (CN); Congyi Wang, Shanghai (CN); and Zhiyong Wang, Shanghai (CN)
Assigned to Mofa (Shanghai) Information Technology Co., Ltd., Shanghai (CN); and Shanghai Movu Technology Co., Ltd., Shanghai (CN)
Appl. No. 18/028,472
Filed by MOFA (SHANGHAI) INFORMATION TECHNOLOGY CO., LTD., Shanghai (CN); and SHANGHAI MOVU TECHNOLOGY CO., LTD., Shanghai (CN)
PCT Filed Aug. 3, 2021, PCT No. PCT/CN2021/110349
§ 371(c)(1), (2) Date Mar. 24, 2023.
PCT Pub. No. WO2022/062680, PCT Pub. Date Mar. 31, 2022.
Claims priority of application No. 202011023780.3 (CN), filed on Sep. 25, 2020.
Prior Publication US 2023/0274484 A1, Aug. 31, 2023
Int. Cl. G06T 13/00 (2011.01); G06T 7/73 (2017.01); G06T 7/246 (2017.01); G06V 40/16 (2022.01); G06V 20/40 (2022.01); G06V 40/20 (2022.01)
CPC G06T 13/00 (2013.01) [G06T 7/248 (2017.01); G06T 7/74 (2017.01); G06V 20/46 (2022.01); G06V 40/174 (2022.01); G06V 40/20 (2022.01); G06T 2207/10016 (2013.01); G06T 2207/30201 (2013.01); G06T 2207/30204 (2013.01)] 15 Claims
OG exemplary drawing
 
1. An animation generation method, comprising:
acquiring real feature data of a real object, wherein the real feature data comprises action data and face data of the real object during a performance process;
determining target feature data of a virtual character according to the real feature data, wherein the virtual character is a preset animation model, and the target feature data comprises action data and face data of the virtual character; and
generating an animation of the virtual character according to the target feature data;
wherein determining the target feature data of the virtual character according to the real feature data comprises:
converting the real feature data into virtual feature data of a virtual object, wherein the virtual object is a virtual model obtained by restoring and reconstructing the real object, and the virtual feature data comprises action data and face data of the virtual object; and
redirecting the virtual feature data to obtain the target feature data of the virtual character;
wherein redirecting the virtual feature data to obtain the target feature data of the virtual character comprises:
invoking a second preset face processing model to redirect the face data of the virtual object to obtain the face data of the virtual character, wherein the face data comprises at least one of expression data or eye expression data;
wherein the second preset face processing model is a pre-trained neural network model configured to represent a correlation between the face data of the virtual object and the face data of the virtual character;
wherein the method further comprises:
acquiring reference data, wherein the reference data comprises at least one of voice recording data of the real object during the performance process or virtual camera position and attitude data of the real object during the performance process;
wherein acquiring reference data comprises:
during the performance process, synchronously capturing a virtual camera, and recording the position, attitude, and movement track of the virtual camera to obtain the virtual camera position and attitude data;
wherein the virtual camera position and attitude data comprises a virtual camera position, a virtual camera direction, and a focal parameter of the virtual camera;
wherein the virtual camera position and attitude data is used to indicate a preview camera viewing angle of a to-be-generated animation image;
and wherein generating the animation of the virtual character according to the target feature data comprises:
generating the animation of the virtual character according to the target feature data and the reference data.
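The claimed pipeline (capture real feature data, reconstruct a virtual object, retarget its action and face data to the virtual character via a pre-trained face processing model, then assemble the animation with recorded voice and virtual-camera pose data) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: every name, data layout, and the linear stand-in for the neural face model are assumptions introduced here.

```python
# Illustrative sketch of the claim-1 pipeline. All names and data
# shapes are assumptions; the "face processing model" below is a
# trivial stand-in for the pre-trained neural network in the claim.
from dataclasses import dataclass, field

@dataclass
class FeatureData:
    action: list  # per-frame action (e.g., skeletal pose) parameters
    face: list    # per-frame face (expression / eye expression) coefficients

@dataclass
class ReferenceData:
    voice: bytes = b""                               # voice recording data
    camera_poses: list = field(default_factory=list) # (position, direction, focal)

def reconstruct_virtual_object(real: FeatureData) -> FeatureData:
    """Restore and reconstruct the real object as a virtual object
    (identity mapping in this sketch)."""
    return FeatureData(action=list(real.action), face=list(real.face))

def face_processing_model(face_coeffs):
    """Stand-in for the second preset face processing model: maps the
    virtual object's face data to the virtual character's face data.
    A simple linear rescale is used here purely for illustration."""
    return [0.8 * c for c in face_coeffs]

def retarget(virtual: FeatureData) -> FeatureData:
    """Redirect the virtual feature data to the virtual character."""
    return FeatureData(
        action=virtual.action,  # skeleton retargeting elided in this sketch
        face=[face_processing_model(f) for f in virtual.face],
    )

def generate_animation(target: FeatureData, ref: ReferenceData) -> dict:
    """Assemble per-frame animation data, pairing each frame with the
    recorded preview-camera pose when one exists."""
    frames = []
    for i, (a, f) in enumerate(zip(target.action, target.face)):
        cam = ref.camera_poses[i] if i < len(ref.camera_poses) else None
        frames.append({"action": a, "face": f, "camera": cam})
    return {"frames": frames, "voice": ref.voice}

# Example: two captured frames, one recorded camera pose.
real = FeatureData(action=[[0.0], [0.1]], face=[[1.0], [0.5]])
ref = ReferenceData(voice=b"...", camera_poses=[((0, 0, 5), (0, 0, -1), 35.0)])
anim = generate_animation(retarget(reconstruct_virtual_object(real)), ref)
```

Each stage mirrors one semicolon-delimited step of the claim, so a real system would replace `face_processing_model` with the trained network and `retarget`'s action branch with proper skeletal redirection while keeping the same data flow.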