US 12,138,543 B1
Enhanced animation generation based on generative control
Wolfram Sebastian Starke, Edinburgh (GB); Yiwei Zhao, Sunnyvale, CA (US); Mohsen Sardari, Redwood City, CA (US); Harold Henry Chaput, Castro Valley, CA (US); Navid Aghdaie, San Jose, CA (US); and Kazi Atif-Uz Zaman, Foster City, CA (US)
Assigned to Electronic Arts Inc., Redwood City, CA (US)
Filed by Electronic Arts Inc., Redwood City, CA (US)
Filed on Jan. 20, 2021, as Appl. No. 17/248,336.
Claims priority of provisional application 62/963,970, filed on Jan. 21, 2020.
Int. Cl. A63F 13/57 (2014.01); G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06T 13/40 (2011.01); G06T 13/80 (2011.01)
CPC A63F 13/57 (2014.09) [G06N 3/04 (2013.01); G06N 3/08 (2013.01); G06T 13/40 (2013.01); G06T 13/80 (2013.01); G06T 2200/24 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method, the method comprising:
accessing, via a system of one or more processors, an autoencoder trained based on character control information generated using motion capture data, the character control information indicating, at least, trajectory information associated with the motion capture data,
wherein the autoencoder is trained to reconstruct, via a latent feature space, the character control information, and output of the autoencoder is provided as a control signal to a motion prediction network;
obtaining, via the system during runtime of an electronic game, first character control information associated with a trajectory of an in-game character of the electronic game, wherein the first character control information is determined based on information derived during a window of frames generated by the electronic game, wherein user input is received from an input controller communicatively coupled to the electronic game;
generating, via the system, combined input from the first character control information and the user input, wherein the user input is mapped to character control information, and wherein the mapped user input perturbs the first character control information;
generating, via the system and based on the combined input via the autoencoder, a latent feature representation associated with the combined input and modifying the latent feature representation; and
outputting, via the autoencoder using the modified latent feature representation, a control signal to the motion prediction network for use in updating a character pose of the in-game character in a user interface associated with the electronic game.
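The runtime flow recited in claim 1 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the dimensions, the `gain` parameter, and the random untrained weights standing in for the trained autoencoder and motion prediction network are all hypothetical, chosen only to show the data flow (combine control info with mapped user input, encode, modify the latent features, decode to a control signal, predict a pose update).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the patent).
CTRL_DIM = 16    # character control vector (trajectory features)
LATENT_DIM = 4   # autoencoder latent feature space
POSE_DIM = 8     # predicted character pose update

# Hypothetical untrained weights standing in for the trained
# autoencoder and motion prediction network of claim 1.
W_enc = rng.normal(size=(LATENT_DIM, CTRL_DIM)) * 0.1
W_dec = rng.normal(size=(CTRL_DIM, LATENT_DIM)) * 0.1
W_motion = rng.normal(size=(POSE_DIM, CTRL_DIM)) * 0.1

def combine_input(ctrl, mapped_user_input, gain=0.5):
    """User input, already mapped into control space, perturbs
    the first character control information."""
    return ctrl + gain * mapped_user_input

def control_signal(combined, latent_edit=None):
    """Encode the combined input, optionally modify the latent
    feature representation, then decode: the autoencoder output
    is the control signal for the motion prediction network."""
    z = np.tanh(W_enc @ combined)   # latent feature representation
    if latent_edit is not None:
        z = z + latent_edit         # modify the latent representation
    return W_dec @ z                # decoded control signal

def predict_pose(signal):
    """Motion prediction network consumes the control signal
    to produce an updated character pose."""
    return np.tanh(W_motion @ signal)

# One runtime step: trajectory-derived control info plus user input.
ctrl = rng.normal(size=CTRL_DIM)
user = rng.normal(size=CTRL_DIM)
combined = combine_input(ctrl, user)
signal = control_signal(combined, latent_edit=0.1 * np.ones(LATENT_DIM))
pose_update = predict_pose(signal)
print(pose_update.shape)  # (8,)
```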