US 11,055,892 B1
Systems and methods for generating a skull surface for computer animation
Byung Kuk Choi, Wellington (NZ)
Assigned to Weta Digital Limited, Wellington (NZ)
Filed by Weta Digital Limited, Wellington (NZ)
Filed on Jan. 20, 2021, as Appl. No. 17/153,733.
Application 17/153,733 is a continuation of application No. 17/079,078, filed on Oct. 23, 2020.
Claims priority of provisional application 63/084,184, filed on Sep. 28, 2020.
Claims priority of provisional application 63/080,468, filed on Sep. 18, 2020.
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 13/40 (2011.01); G06T 17/00 (2006.01); G06F 16/53 (2019.01); G06T 7/50 (2017.01); G06N 3/08 (2006.01)
CPC G06T 13/40 (2013.01) [G06F 16/53 (2019.01); G06N 3/08 (2013.01); G06T 7/50 (2017.01); G06T 17/00 (2013.01); G06T 2200/24 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30201 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method for generating a skull surface depicting a skull of a live actor, the method comprising:
obtaining a plurality of facial scans of the live actor, each facial scan including a respective skin surface and respective sensing data indicative of facial muscle strains corresponding to a set of facial muscles, wherein the facial muscle strains cause deformations on the respective skin surface;
obtaining, from a database, a tissue depth dataset including a plurality of tissue depths corresponding to a plurality of tissue depth points on a human face, wherein each tissue depth indicates a distance from a corresponding tissue depth point on the human face to the skull underneath the human face;
determining, from the respective skin surfaces in the plurality of facial scans, a three-dimensional facial skin surface structure of the live actor;
generating, from the three-dimensional facial skin surface structure and the tissue depth dataset, the skull surface by offsetting the corresponding tissue depth points on the three-dimensional facial skin surface structure by the plurality of tissue depths, respectively; and
refining the skull surface via a neural network by training on the plurality of facial scans and using muscle or skin parameters derived from the plurality of facial scans as ground truth labels.
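The skull-surface generation step of the claim can be illustrated with a minimal geometric sketch: each tissue depth point on the recovered skin surface is pushed inward, opposite its surface normal, by the corresponding tissue depth from the dataset. All names, shapes, and the unit-sphere toy data below are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def generate_skull_surface(skin_vertices, skin_normals, tissue_depths):
    """Offset each tissue depth point on the skin surface inward along its
    outward-pointing unit normal by the corresponding tissue depth,
    yielding a skull-surface estimate (one skull point per skin point)."""
    skin_vertices = np.asarray(skin_vertices, dtype=float)
    skin_normals = np.asarray(skin_normals, dtype=float)
    tissue_depths = np.asarray(tissue_depths, dtype=float)
    # Moving against the outward normal places each point beneath the skin.
    return skin_vertices - skin_normals * tissue_depths[:, None]

# Toy example: three points on a unit sphere as the "skin", uniform 0.1 depth.
verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
normals = verts.copy()  # on a unit sphere, a vertex equals its own normal
skull = generate_skull_surface(verts, normals, np.array([0.1, 0.1, 0.1]))
# Each skull point sits 0.1 inside the sphere, e.g. (0.9, 0, 0).
```

In practice the normals would come from the reconstructed facial mesh, and tissue depths would be looked up per anatomical landmark from the database the claim recites.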
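The refinement step trains a network on the facial scans, supervised by muscle or skin parameters derived from those scans. A minimal stand-in for that training loop, using a one-layer linear model fit by gradient descent on synthetic data (all shapes, feature names, and the learning rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: per-scan strain features and per-scan
# "ground truth" parameter vectors derived from the scans, as the claim
# describes. 64 scans, 8 strain features, 3 target parameters.
X = rng.normal(size=(64, 8))
true_W = rng.normal(size=(8, 3))
y = X @ true_W  # parameters derived from the scans serve as labels

# One-layer linear "network" trained with plain gradient descent on MSE.
W = np.zeros((8, 3))
lr = 0.05
for _ in range(500):
    grad = X.T @ (X @ W - y) / len(X)  # gradient of mean-squared error
    W -= lr * grad

max_error = np.abs(X @ W - y).max()  # should be near zero after training
```

A production system would replace the linear map with a deeper network and feed its output back to correct the offset skull surface; this sketch only shows the supervised-training pattern the claim invokes.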