US 11,941,771 B2
Multi-dimensional model texture transfer
Kumar Abhinav, Hazaribag (IN); Alpana A. Dubey, Bangalore (IN); Suma Mani Kuriakose, Mumbai (IN); and Devasish Mahato, Jamshedpur (IN)
Assigned to Accenture Global Solutions Limited, Dublin (IE)
Filed by Accenture Global Solutions Limited, Dublin (IE)
Filed on Feb. 3, 2021, as Appl. No. 17/166,049.
Prior Publication US 2022/0245908 A1, Aug. 4, 2022
Int. Cl. G06T 19/20 (2011.01); G06T 17/10 (2006.01)
CPC G06T 19/20 (2013.01) [G06T 17/10 (2013.01); G06T 2210/56 (2013.01); G06T 2219/2024 (2013.01)] 27 Claims
OG exemplary drawing
 
1. A computer-implemented method for multi-dimensional texture transfer between digital models, the method comprising:
processing a content object model through a machine learning (ML) model to provide a set of base content feature representations as a concatenation of content structural feature vectors and content spatial feature vectors output from a first mesh convolution block of the ML model, the content object model comprising a three-dimensional representation of a content object;
processing a style object model through the ML model to provide sets of base style feature representations based on output of a spatial descriptor of the ML model, output of a structural descriptor of the ML model, output of the first mesh convolution block of the ML model, and output of a second mesh convolution block of the ML model, the style object model comprising a three-dimensional representation of a style object having a three-dimensional texture that is to be applied to the content object, wherein the three-dimensional texture is represented in the style object model as a first set of points in three-dimensional space, each point comprising an x-coordinate, a y-coordinate, and a z-coordinate;
initializing an initial stylized object model as the content object model, the initial stylized object model comprising a three-dimensional representation of a stylized object and being absent the three-dimensional texture of the style object;
executing two or more iterations of an iterative process, each of the two or more iterations comprising:
generating, by the ML model, sets of stylized feature representations for the initial stylized object model, the initial stylized object model having one or more adjusted parameters relative to a previous iteration, the one or more adjusted parameters at least partially comprising multi-dimensional points in a second set of points, each point in the second set of points comprising an x-coordinate, a y-coordinate, and a z-coordinate, and defining an extent of the stylized object in three-dimensional space to represent morphing of the stylized object relative to the previous iteration toward the three-dimensional texture of the style object, the one or more adjusted parameters being based on a delta value pushing toward optimization of a total loss across iterations,
determining the total loss based on the sets of stylized feature representations of the initial stylized object model, the set of base content feature representations of the content object model, and the sets of base style feature representations of the style object model, and
determining that the total loss is non-optimized, and in response, initiating a next iteration;
executing an iteration of the iterative process, the iteration comprising determining that the total loss is optimized, and in response providing the initial stylized object model as output of the iterative process; and
smoothing the initial stylized object model to provide a stylized object model representing the stylized object comprising at least a portion of content of the content object model and at least a portion of the three-dimensional texture of the style object model.
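
The claim above describes extracting feature representations from the content, style, and stylized object models through a mesh-based ML model having a spatial descriptor, a structural descriptor, and two mesh convolution blocks, and combining those representations into a total loss. The following Python/PyTorch sketch is illustrative only and is not the patented implementation: the MeshEncoder layers, the Gram-matrix style statistic, and the loss weights are hypothetical stand-ins chosen to show one plausible way the tapped features and the total loss could be wired together.

# Illustrative sketch only -- not the patented implementation.
# MeshEncoder, its sub-modules, gram(), and the loss weights are
# hypothetical stand-ins for the ML model described in claim 1.
import torch
import torch.nn as nn

class MeshEncoder(nn.Module):
    """Hypothetical encoder with spatial/structural descriptors and two
    mesh-convolution blocks, mirroring the feature taps named in claim 1."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.spatial_descriptor = nn.Linear(3, feat_dim)     # per-vertex xyz -> spatial features
        self.structural_descriptor = nn.Linear(3, feat_dim)  # per-vertex normals -> structural features
        self.mesh_conv1 = nn.Linear(2 * feat_dim, feat_dim)  # stand-in for first mesh convolution block
        self.mesh_conv2 = nn.Linear(feat_dim, feat_dim)      # stand-in for second mesh convolution block

    def forward(self, verts: torch.Tensor, normals: torch.Tensor):
        spatial = self.spatial_descriptor(verts)
        structural = self.structural_descriptor(normals)
        # Content features: concatenation of structural and spatial vectors
        # fed through the first mesh-convolution block.
        conv1 = self.mesh_conv1(torch.cat([structural, spatial], dim=-1))
        conv2 = self.mesh_conv2(torch.relu(conv1))
        return {"spatial": spatial, "structural": structural,
                "conv1": conv1, "conv2": conv2}

def gram(feats: torch.Tensor) -> torch.Tensor:
    """Gram matrix over per-vertex features (a common style statistic;
    the patent's exact style loss may differ)."""
    return feats.T @ feats / feats.shape[0]

def total_loss(stylized_feats, content_feats, style_feats,
               content_w: float = 1.0, style_w: float = 10.0) -> torch.Tensor:
    # Content term: keep the stylized model close to the content model's
    # first-block features.
    l_content = torch.mean((stylized_feats["conv1"] - content_feats["conv1"]) ** 2)
    # Style term: match feature statistics at every tapped layer
    # (descriptor outputs and both mesh-convolution blocks).
    l_style = sum(torch.mean((gram(stylized_feats[k]) - gram(style_feats[k])) ** 2)
                  for k in ("spatial", "structural", "conv1", "conv2"))
    return content_w * l_content + style_w * l_style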
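
The iterative process of the claim initializes the stylized object model as the content object model and, on each iteration, adjusts its three-dimensional points by a delta that pushes the total loss toward an optimum, stopping when the loss is treated as optimized. Below is a minimal sketch of such a loop, assuming the encoder and total_loss above, gradient-based updates via Adam, fixed normals, and a convergence tolerance as the "optimized" test; all of these choices are illustrative assumptions, not taken from the patent.

# Illustrative optimization loop, continuing the sketch above.  The stylized
# model starts as a copy of the content model and its vertex xyz coordinates
# (the adjusted parameters) are nudged each iteration to reduce the total loss.
def run_style_transfer(encoder, content_verts, content_normals,
                       style_verts, style_normals,
                       iterations: int = 500, lr: float = 1e-3,
                       tol: float = 1e-4):
    with torch.no_grad():
        content_feats = encoder(content_verts, content_normals)   # base content features
        style_feats = encoder(style_verts, style_normals)         # base style features

    # Initialize the stylized object model as the content object model;
    # the xyz coordinates become the optimizable parameters.
    stylized_verts = content_verts.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([stylized_verts], lr=lr)

    prev = float("inf")
    for _ in range(iterations):
        optimizer.zero_grad()
        # Normals are held fixed here for brevity; a fuller sketch would
        # recompute them as the mesh morphs.
        stylized_feats = encoder(stylized_verts, content_normals)
        loss = total_loss(stylized_feats, content_feats, style_feats)
        loss.backward()
        optimizer.step()                    # delta pushing toward a lower total loss
        if abs(prev - loss.item()) < tol:   # treat convergence as "optimized"
            break
        prev = loss.item()
    return stylized_verts.detach()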
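
The final step smooths the converged model to produce the output stylized object model. Simple Laplacian smoothing is shown here purely as an illustrative stand-in for that smoothing step; the adjacency mapping, blend factor, and step count are hypothetical.

def laplacian_smooth(verts: torch.Tensor, adjacency: dict,
                     alpha: float = 0.5, steps: int = 3) -> torch.Tensor:
    """Blend each vertex toward the mean of its neighbors; `adjacency`
    maps each vertex index to a list of neighbor indices."""
    for _ in range(steps):
        new_verts = verts.clone()
        for i, nbrs in adjacency.items():
            new_verts[i] = (1 - alpha) * verts[i] + alpha * verts[list(nbrs)].mean(dim=0)
        verts = new_verts
    return verts

# Example wiring of the three sketches (hypothetical inputs):
# stylized = laplacian_smooth(
#     run_style_transfer(MeshEncoder(), content_verts, content_normals,
#                        style_verts, style_normals),
#     adjacency)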