US 11,887,241 B2
Learning 2D texture mapping in volumetric neural rendering
Zexiang Xu, San Jose, CA (US); Yannick Hold-Geoffroy, San Jose, CA (US); Milos Hasan, Lafayette, CA (US); Kalyan Sunkavalli, San Jose, CA (US); and Fanbo Xiang, San Diego, CA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Dec. 22, 2021, as Appl. No. 17/559,867.
Claims priority of provisional application 63/130,319, filed on Dec. 23, 2020.
Prior Publication US 2022/0198738 A1, Jun. 23, 2022
Int. Cl. G06T 15/04 (2011.01); G06T 15/20 (2011.01); G06N 3/08 (2023.01); G06T 19/20 (2011.01); G06N 3/045 (2023.01)
CPC G06T 15/04 (2013.01) [G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 15/20 (2013.01); G06T 19/20 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
obtaining a plurality of images of a scene depicting at least one object;
determining a volume density of the scene using a scene geometry network to generate a 3D geometric representation of the object;
mapping 3D points of the scene to a 2D texture space using a texture mapping network, wherein the texture mapping network is trained on a cycle loss together with an inverse texture mapping network that maps from the 2D texture space back to 3D points, the cycle loss training the texture mapping network and the inverse texture mapping network to enforce a cycle mapping between the 2D texture space and points on a surface of the scene; and
determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a texture network to generate a 3D appearance representation of the object.
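The following is a minimal sketch, not the patented implementation, of the cycle-consistency training recited in claim 1, assuming PyTorch. The network widths and depths, the helper name mlp, and the use of a mean-squared-error penalty for the one direction shown (3D → 2D → 3D; a full cycle loss could also penalize the 2D → 3D → 2D direction) are illustrative assumptions not specified by the claim.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Small MLP; the patent does not prescribe an architecture (assumption)."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

texture_mapping = mlp(3, 2)   # maps 3D surface points -> 2D texture coordinates
inverse_mapping = mlp(2, 3)   # inverse network: 2D texture coordinates -> 3D points

def cycle_loss(surface_pts):
    """Enforce g(f(x)) ~= x for points x on the scene surface (one cycle direction)."""
    uv = texture_mapping(surface_pts)   # f: 3D -> 2D texture space
    recon = inverse_mapping(uv)         # g: 2D -> 3D
    return F.mse_loss(recon, surface_pts)

# Usage: in practice the points would be sampled from the surface estimated by
# the scene geometry network; a random placeholder batch is used here.
surface_pts = torch.rand(1024, 3)
loss = cycle_loss(surface_pts)
loss.backward()
```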
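And a companion sketch, reusing the mlp helper and the texture_mapping network from the block above, of how the three networks recited in claim 1 might be queried per 3D sample point. The interface (raw coordinates as inputs, a view direction concatenated to the texture coordinate, the relu/sigmoid output activations) is an assumption; the claim does not prescribe it.

```python
scene_geometry_net = mlp(3, 1)   # 3D point -> volume density (sigma)
texture_net = mlp(2 + 3, 3)      # (2D texture coord, view direction) -> RGB radiance

def query_point(x, view_dir):
    """Evaluate density and view-dependent radiance for 3D points x."""
    sigma = torch.relu(scene_geometry_net(x))   # non-negative volume density
    uv = texture_mapping(x)                     # map 3D points into 2D texture space
    rgb = torch.sigmoid(texture_net(torch.cat([uv, view_dir], dim=-1)))
    return sigma, rgb

# Usage with placeholder samples along camera rays:
x = torch.rand(1024, 3)
d = F.normalize(torch.randn(1024, 3), dim=-1)
sigma, rgb = query_point(x, d)
```

In a volumetric neural rendering pipeline, the returned sigma and rgb values would then be alpha-composited along each camera ray to produce pixel colors for the plurality of viewpoints.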