| CPC G06Q 30/0643 (2013.01) [G06Q 30/0629 (2013.01); G06T 11/00 (2013.01); G06T 19/20 (2013.01); G06V 10/22 (2022.01)] | 14 Claims |

1. A method comprising:
receiving, from a server, first two-dimensional image data representing a human body;
generating, using a first neural network, a three-dimensional virtual model of the human body based at least in part on the first two-dimensional image data representing the human body;
receiving, from a first database, second two-dimensional image data representing a first article of clothing and third two-dimensional image data representing a second article of clothing;
generating, using a second neural network, segment features of the first article of clothing based at least in part on the second two-dimensional image data representing the first article of clothing;
determining, using the second neural network, a first clothing type based on the segment features of the first article of clothing, wherein the first clothing type includes data relating to where the first article of clothing is worn;
generating, using the second neural network, segment features of the second article of clothing based at least in part on the third two-dimensional image data representing the second article of clothing;
determining, using the second neural network, a second clothing type based on the segment features of the second article of clothing, wherein the second clothing type includes data relating to where the second article of clothing is worn;
reposing, using the second neural network, the segment features of the first article of clothing based at least in part on the three-dimensional virtual model of the human body and the first clothing type;
reposing, using the second neural network, the segment features of the second article of clothing based at least in part on the three-dimensional virtual model of the human body and the second clothing type;
determining, using the second neural network and based on the first clothing type and the second clothing type, that the reposed segment features of the first article of clothing and the reposed segment features of the second article of clothing overlap and define an overlapping region;
determining, using the second neural network, clothing positioning comprising positioning of the first article of clothing and the second article of clothing, wherein the clothing positioning defines which pixels of the first article of clothing and which pixels of the second article of clothing will be used as output pixels for the overlapping region;
generating, using the second neural network, based at least in part on the three-dimensional virtual model of the human body, the reposed segment features of the first article of clothing, the reposed segment features of the second article of clothing, and the clothing positioning, a layer mask indicating whether a plurality of output pixels of an output image should be produced according to the first two-dimensional image data representing the human body, the second two-dimensional image data representing the first article of clothing, or the third two-dimensional image data representing the second article of clothing;
generating, using a third neural network, texture features of the human body based at least in part on the first two-dimensional image data representing the human body;
generating, using the third neural network, texture features of the first article of clothing based at least in part on the second two-dimensional image data representing the first article of clothing;
generating, using the third neural network, texture features of the second article of clothing based at least in part on the third two-dimensional image data representing the second article of clothing;
reposing, using the third neural network, the texture features of the first article of clothing based at least in part on the three-dimensional virtual model of the human body;
reposing, using the third neural network, the texture features of the second article of clothing based at least in part on the three-dimensional virtual model of the human body;
sampling, using the third neural network, the texture features of the human body, the reposed texture features of the first article of clothing, and the reposed texture features of the second article of clothing according to the layer mask; and
producing, using a fourth neural network, the plurality of output pixels of the output image based at least in part on the sampled texture features of the human body, the sampled reposed texture features of the first article of clothing, and the sampled reposed texture features of the second article of clothing, wherein the output image shows the first article of clothing and the second article of clothing on the three-dimensional virtual model of the human body in the first two-dimensional image data representing the human body.
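The layer-mask compositing step recited in the claim — a per-pixel mask selecting whether each output pixel comes from the body image, the first garment, or the second garment, with the clothing positioning resolving the overlapping region — can be sketched as plain per-pixel logic. This is an illustrative sketch only: the boolean-coverage representation, the `first_on_top` flag, and all function names are assumptions for exposition, not the claimed neural-network implementation, which learns the mask and samples texture features rather than raw pixels.

```python
# Layer labels the mask assigns to each output pixel
BODY, FIRST, SECOND = 0, 1, 2

def build_layer_mask(cover1, cover2, first_on_top):
    """Combine two garment coverage masks into one per-pixel layer mask.

    cover1/cover2 are 2-D lists of booleans saying whether each garment's
    reposed segment covers the pixel. Where both cover a pixel (the
    overlapping region), the clothing-positioning rule (here reduced to a
    single first_on_top flag) decides whose pixels become output pixels.
    """
    h, w = len(cover1), len(cover1[0])
    layer = [[BODY] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c1, c2 = cover1[y][x], cover2[y][x]
            if c1 and c2:                       # overlapping region
                layer[y][x] = FIRST if first_on_top else SECOND
            elif c1:
                layer[y][x] = FIRST
            elif c2:
                layer[y][x] = SECOND
    return layer

def composite(body_px, g1_px, g2_px, layer):
    """Sample each source image according to the layer mask."""
    sources = {BODY: body_px, FIRST: g1_px, SECOND: g2_px}
    return [[sources[layer[y][x]][y][x] for x in range(len(layer[0]))]
            for y in range(len(layer))]

# Toy 2x2 example: the first garment covers the left column, the second
# the top row; they overlap at (0, 0), and the clothing positioning puts
# the first garment on top there.
cover1 = [[True, False], [True, False]]
cover2 = [[True, True], [False, False]]
layer = build_layer_mask(cover1, cover2, first_on_top=True)

body = [["B", "B"], ["B", "B"]]
g1 = [["1", "1"], ["1", "1"]]
g2 = [["2", "2"], ["2", "2"]]
out = composite(body, g1, g2, layer)
```

In the claim, the analogue of `first_on_top` is derived from the two clothing types (e.g., an outer layer worn over a base layer), and the final pixels are produced by a fourth neural network from sampled texture features rather than copied directly.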