US 12,437,492 B2
Controllable dynamic appearance for neural 3D portraits
Zhixin Shu, San Jose, CA (US); Zexiang Xu, San Jose, CA (US); Shahrukh Athar, Stony Brook, NY (US); Sai Bi, San Jose, CA (US); Kalyan Sunkavalli, Saratoga, CA (US); and Fujun Luan, San Jose, CA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Apr. 7, 2023, as Appl. No. 18/132,272.
Prior Publication US 2024/0338915 A1, Oct. 10, 2024
Int. Cl. G06T 19/20 (2011.01); G06N 3/08 (2023.01); G06T 15/80 (2011.01); G06T 17/20 (2006.01)
CPC G06T 19/20 (2013.01) [G06N 3/08 (2013.01); G06T 15/80 (2013.01); G06T 17/20 (2013.01); G06T 2210/44 (2013.01); G06T 2219/2012 (2013.01); G06T 2219/2021 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
projecting a color at a plurality of points in a digital video portrait based on a location, a surface normal, and a viewing direction for each respective point in a canonical space;
defining a neural radiance field as a continuous function that outputs color and density for the digital video portrait regardless of lighting;
providing, using the neural radiance field for the digital video portrait, a guided deformation field;
projecting, using the color and based on the guided deformation field, a dynamic face normal relative to a surface at each respective point of the plurality of points as changed relative to the surface normal based on an articulated head pose and facial expression in the digital video portrait;
disentangling, based on the dynamic face normal for each respective point of the plurality of points, a facial appearance in the digital video portrait into a plurality of intrinsic components in the canonical space; and
rendering at least a portion of a head pose as a controllable, neural three-dimensional portrait based on the digital video portrait using the plurality of intrinsic components.
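The claimed pipeline — warping observation points into a canonical space with a guided deformation field, querying a canonical neural radiance field for density and intrinsic components, shading with dynamic normals, and volume-rendering the result — can be illustrated with a minimal NumPy sketch. All functions below (`guided_deformation`, `canonical_nerf`, `shade`, `render_ray`) are toy stand-ins invented for illustration; they are not the patented implementation, and the trigonometric "networks" merely stand in for learned MLPs:

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_deformation(points, expression, head_pose):
    # Toy stand-in for the guided deformation field: warp observed
    # points into canonical space, conditioned on head pose and
    # expression (here just a small pose/expression-dependent offset).
    return points + 0.01 * np.tanh(points @ head_pose) * expression

def canonical_nerf(points, view_dirs):
    # Toy stand-in for the canonical radiance field: a continuous
    # function returning per-point density, intrinsic albedo, and a
    # surface-normal estimate in canonical space.
    density = np.abs(np.sin(points.sum(axis=-1)))
    albedo = 0.5 + 0.5 * np.cos(points)          # (N, 3) intrinsic color
    normals = points / np.linalg.norm(points, axis=-1, keepdims=True)
    return density, albedo, normals

def shade(albedo, normals, light_dir):
    # Intrinsic decomposition: appearance = albedo * diffuse shading,
    # so lighting can be controlled independently of geometry.
    shading = np.clip(normals @ light_dir, 0.0, None)[:, None]
    return albedo * shading

def render_ray(samples, view_dir, expression, head_pose, light_dir):
    # Volume rendering along one ray: alpha-composite shaded colors.
    canon = guided_deformation(samples, expression, head_pose)
    density, albedo, normals = canonical_nerf(canon, view_dir)
    color = shade(albedo, normals, light_dir)
    delta = 0.1                                  # uniform sample spacing
    alpha = 1.0 - np.exp(-density * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)

samples = rng.normal(size=(64, 3))               # points along a camera ray
pixel = render_ray(samples,
                   view_dir=np.array([0.0, 0.0, 1.0]),
                   expression=0.3,
                   head_pose=np.eye(3),
                   light_dir=np.array([0.0, 0.0, 1.0]))
print(pixel.shape)  # RGB value for this ray
```

Because the albedo, shading, and density are produced as separate intrinsic components, relighting amounts to changing `light_dir` without retraining or re-deforming the geometry, which mirrors the disentanglement step of the claim.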