US 12,423,913 B2
Invertible neural skinning
Menglei Chai, Los Angeles, CA (US); Riza Alp Guler, London (GB); Yash Mukund Kant, Toronto (CA); Jian Ren, Hermosa Beach, CA (US); Aliaksandr Siarohin, Los Angeles, CA (US); and Sergey Tulyakov, Santa Monica, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Menglei Chai, Los Angeles, CA (US); Riza Alp Guler, London (GB); Yash Mukund Kant, Toronto (CA); Jian Ren, Hermosa Beach, CA (US); Aliaksandr Siarohin, Los Angeles, CA (US); and Sergey Tulyakov, Santa Monica, CA (US)
Filed on Dec. 29, 2022, as Appl. No. 18/090,724.
Prior Publication US 2024/0221314 A1, Jul. 4, 2024
Int. Cl. G06T 17/20 (2006.01); G06F 3/01 (2006.01); G06N 3/04 (2023.01); G06T 13/20 (2011.01)
CPC G06T 17/20 (2013.01) [G06F 3/011 (2013.01); G06N 3/04 (2013.01); G06T 13/20 (2013.01)] 9 Claims
OG exemplary drawing
 
1. An invertible neural skinning (INS) pipeline for animating a three-dimensional (3D) mesh of a deformable object, comprising:
a first trained Pose-conditioned Invertible Neural Network (PIN) that obtains novel poses of the deformable object in a pose-dependent canonical space from a given pose of the deformable object defined by a generic set of bones and the 3D mesh, wherein the first trained PIN comprises an invertible transformation algorithm that provides the novel poses of the deformable object in the pose-dependent canonical space from the given pose provided as input during training;
a trained differentiable Linear Blend Skinning (LBS) neural network that transforms points in the pose-dependent canonical space to deformed points in novel poses of the deformable object;
a second trained PIN that maps canonical points of the deformable object in the pose-dependent canonical space to canonical points in a pose-independent canonical space; and
a canonical occupancy network or a neural network that receives the canonical points of the deformable object in the pose-independent canonical space,
wherein the given pose of the deformable object is animated, via skeletal bone articulation with the generic set of bones, by extracting a mesh of the deformable object from the canonical occupancy network or the neural network to obtain poses of the deformable object in pose-independent canonical space and reposing mesh vertices of the extracted mesh of the deformable object using the generic set of bones via an inverse pass of the INS pipeline, whereby canonical points in the pose-independent canonical space are mapped by the second trained PIN to pose correspondences of points in pose-dependent canonical space that are applied to the trained differentiable LBS neural network to obtain novel poses of the deformable object that are transformed by the first trained PIN for display as an animated deformable object.
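For illustration only (not part of the claim), the following is a minimal PyTorch sketch of a pose-conditioned invertible block of the kind the claimed PINs could be built from. It assumes an affine-coupling design, 3D point inputs, and a 72-dimensional pose code; the claim requires only that the transform be invertible and conditioned on the given pose, so the class name, coordinate split, and dimensions below are assumptions rather than the patented implementation.

import torch
import torch.nn as nn

class PoseConditionedCoupling(nn.Module):
    """Illustrative affine-coupling block conditioned on a pose code (assumed design)."""

    def __init__(self, point_dim=3, pose_dim=72, hidden=128):
        super().__init__()
        self.split = point_dim // 2                    # coordinates passed through unchanged
        rest = point_dim - self.split                  # coordinates that are transformed
        self.net = nn.Sequential(
            nn.Linear(self.split + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * rest),
        )

    def forward(self, x, pose):
        # Deformed/posed points -> pose-dependent canonical space (direction of the first PIN).
        x_a, x_b = x[..., :self.split], x[..., self.split:]
        scale, shift = self.net(torch.cat([x_a, pose], dim=-1)).chunk(2, dim=-1)
        return torch.cat([x_a, x_b * torch.exp(scale) + shift], dim=-1)

    def inverse(self, y, pose):
        # Exact analytic inverse, usable on the reposing (inverse) pass of the pipeline.
        y_a, y_b = y[..., :self.split], y[..., self.split:]
        scale, shift = self.net(torch.cat([y_a, pose], dim=-1)).chunk(2, dim=-1)
        return torch.cat([y_a, (y_b - shift) * torch.exp(-scale)], dim=-1)

Stacking several such blocks, permuting which coordinates are held fixed from block to block, yields a deeper map that remains exactly invertible because each block has the closed-form inverse above.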
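The differentiable LBS element can be pictured as follows: each point in the pose-dependent canonical space is deformed by a convex combination of rigid bone transforms, with the combination weights predicted by a small network so that the whole step stays differentiable. This is a minimal sketch assuming PyTorch, softmax-normalized per-point weights, and 4x4 homogeneous bone transforms; the weight-network architecture and tensor shapes are illustrative assumptions.

import torch
import torch.nn as nn

class DifferentiableLBS(nn.Module):
    def __init__(self, num_bones, hidden=128):
        super().__init__()
        # Small MLP predicting per-point skinning weights from canonical coordinates.
        self.weight_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bones),
        )

    def forward(self, canonical_pts, bone_transforms):
        # canonical_pts: (N, 3) points in the pose-dependent canonical space.
        # bone_transforms: (num_bones, 4, 4) rigid transforms from canonical to target pose.
        weights = self.weight_net(canonical_pts).softmax(dim=-1)                  # (N, B)
        homog = torch.cat([canonical_pts,
                           torch.ones_like(canonical_pts[:, :1])], dim=-1)        # (N, 4)
        per_bone = torch.einsum('bij,nj->nbi', bone_transforms, homog)            # (N, B, 4)
        deformed = (weights.unsqueeze(-1) * per_bone).sum(dim=1)                  # (N, 4)
        return deformed[:, :3]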
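The canonical occupancy network named in the claim can be as simple as an MLP that maps a point in the pose-independent canonical space to an occupancy probability. A minimal sketch follows; the depth, width, Softplus activation, and sigmoid output are assumptions, not details taken from the patent.

import torch
import torch.nn as nn

class CanonicalOccupancyNet(nn.Module):
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        blocks, dim = [], 3
        for _ in range(layers):
            blocks += [nn.Linear(dim, hidden), nn.Softplus(beta=100)]
            dim = hidden
        blocks.append(nn.Linear(dim, 1))
        self.mlp = nn.Sequential(*blocks)

    def forward(self, pts):
        # pts: (N, 3) points in the pose-independent canonical space.
        return torch.sigmoid(self.mlp(pts)).squeeze(-1)   # occupancy in [0, 1]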
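Putting the pieces together, the reposing (inverse) pass described in the wherein clause can be sketched as: extract a mesh from the canonical occupancy field, map its vertices from the pose-independent to the pose-dependent canonical space with the second PIN, skin them with the differentiable LBS step, and let the first PIN transform the result for display. The sketch below assumes the illustrative interfaces defined above plus scikit-image's marching_cubes for mesh extraction; the function name repose_mesh, the grid resolution, the pose-code shape, and the use of the PINs' inverse calls in this direction are assumptions about one plausible wiring, not the claimed method itself.

import numpy as np
import torch
from skimage.measure import marching_cubes   # assumed dependency for mesh extraction

@torch.no_grad()
def repose_mesh(occupancy_net, second_pin, lbs, first_pin,
                pose_code, bone_transforms, grid_res=128, level=0.5):
    # pose_code: assumed (pose_dim,) pose vector for the generic set of bones.
    # 1. Sample the canonical occupancy field on a regular grid and extract the mesh
    #    of the deformable object in the pose-independent canonical space.
    lin = torch.linspace(-1.0, 1.0, grid_res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing='ij'), dim=-1)
    occ = occupancy_net(grid.reshape(-1, 3)).reshape(grid_res, grid_res, grid_res)
    verts, faces, _, _ = marching_cubes(occ.cpu().numpy(), level=level)
    verts = torch.from_numpy(verts.astype(np.float32)) / (grid_res - 1) * 2.0 - 1.0
    pose = pose_code.expand(verts.shape[0], -1)

    # 2. Second PIN: pose-independent canonical points -> pose correspondences in the
    #    pose-dependent canonical space (its forward direction per the claim is the
    #    opposite mapping, hence the inverse call here).
    verts_pd = second_pin.inverse(verts, pose)

    # 3. Differentiable LBS: pose-dependent canonical points -> deformed points in the
    #    novel pose defined by the generic set of bones.
    verts_deformed = lbs(verts_pd, bone_transforms)

    # 4. First PIN transforms the skinned points for display as the animated object.
    verts_posed = first_pin.inverse(verts_deformed, pose)
    return verts_posed, torch.from_numpy(faces.astype(np.int64))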