US 11,995,749 B2
Rig-space neural rendering of digital assets
Dominik Borer, Zurich (CH); Jakob Buhmann, Zurich (CH); and Martin Guay, Zurich (CH)
Assigned to DISNEY ENTERPRISES, INC., Burbank, CA (US); and ETH Zürich (Eidgenössische Technische Hochschule Zürich), Zürich (CH)
Filed by DISNEY ENTERPRISES, INC., Burbank, CA (US); and ETH Zürich (Eidgenössische Technische Hochschule Zürich), Zürich (CH)
Filed on Mar. 5, 2020, as Appl. No. 16/810,792.
Claims priority of provisional application 62/965,163, filed on Jan. 23, 2020.
Prior Publication US 2021/0233300 A1, Jul. 29, 2021
Int. Cl. G06T 13/40 (2011.01); G06N 20/00 (2019.01)
CPC G06T 13/40 (2013.01) [G06N 20/00 (2019.01)] 21 Claims
OG exemplary drawing
 
1. A computer-implemented method for generating image data of a scene including a three-dimensional (3D) animatable asset, the method comprising:
accessing a machine learning model that has been trained via first image data of the 3D animatable asset generated by rendering movements of the 3D animatable asset based on first rig vector data that is associated with a plurality of poses of an animation rig usable to deform the 3D animatable asset via a plurality of control points included in the animation rig, wherein at least one of the movements of the 3D animatable asset is rendered from a plurality of virtual camera views;
receiving second rig vector data that includes a plurality of rig parameter values associated with the plurality of control points included in the animation rig; and
generating, via the machine learning model, second image data of the 3D animatable asset based on the second rig vector data, wherein generating the second image data comprises inputting a one-dimensional (1D) array of the plurality of rig parameter values included in the second rig vector data into the machine learning model that outputs the second image data.
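The generating step of claim 1 can be illustrated with a minimal sketch: a function that accepts a one-dimensional array of rig parameter values (the second rig vector data) and produces an image tensor (the second image data). Everything here is a hypothetical stand-in, not the patented implementation: the rig dimension, image resolution, network shape, and fixed random weights are assumptions substituting for a model actually trained on rendered multi-view image data of the animatable asset.

```python
import numpy as np

# Illustrative sketch only (assumed architecture, NOT the patented model):
# a toy "rig-space neural renderer" mapping a 1D rig parameter vector
# directly to pixel data, as in the last step of claim 1.

RIG_DIM = 12           # number of rig control-point parameters (assumed)
IMG_H, IMG_W = 16, 16  # output image resolution (assumed)

rng = np.random.default_rng(0)

# Stand-in for weights learned from "first image data" rendered from a
# plurality of virtual camera views; here they are just fixed random values.
W1 = rng.standard_normal((RIG_DIM, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, IMG_H * IMG_W * 3)) * 0.1
b2 = np.zeros(IMG_H * IMG_W * 3)

def render_from_rig(rig_vector: np.ndarray) -> np.ndarray:
    """Generate image data from a 1D array of rig parameter values."""
    assert rig_vector.ndim == 1 and rig_vector.shape[0] == RIG_DIM
    h = np.tanh(rig_vector @ W1 + b1)        # hidden features
    img = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # pixel values in [0, 1]
    return img.reshape(IMG_H, IMG_W, 3)      # the "second image data"

# Usage: a pose encoded as rig parameter values for the rig's control points.
pose = rng.uniform(-1.0, 1.0, size=RIG_DIM)
image = render_from_rig(pose)
print(image.shape)  # (16, 16, 3)
```

The sketch mirrors only the data flow claimed (1D rig vector in, image out); the actual model architecture, training procedure, and rig parameterization are specified in the patent body, not reproduced here.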