CPC H04N 13/111 (2018.05) [G06F 40/284 (2020.01); G06F 40/40 (2020.01); G06T 7/11 (2017.01); G06T 7/174 (2017.01); G06V 10/7715 (2022.01); G06V 20/20 (2022.01); G06V 20/64 (2022.01); G06V 40/171 (2022.01); H04N 13/398 (2018.05); G06T 2207/20221 (2013.01)]    16 Claims

1. A method for personalized image generation, comprising:
determining a facial description text specified by a user;
generating, based on the facial description text, a facial image by using a generative algorithm;
obtaining a first dressing effect image of a target garment, the first dressing effect image presenting a wearing effect of the target garment on a digital model; and
generating a second dressing effect image of the target garment by performing a fusion operation on the facial image and the first dressing effect image, the second dressing effect image presenting a wearing effect of the target garment on a fused digital model, wherein the performing a fusion operation on the facial image and the first dressing effect image includes:
extracting a facial feature vector from the facial image;
obtaining a dressing effect feature vector by parsing the first dressing effect image to identify a garment region and an exposed human body region that is not covered by the target garment and extracting the dressing effect feature vector from the first dressing effect image, the dressing effect feature vector at least including a human body feature vector corresponding to the exposed human body region;
obtaining a fusion result by performing feature fusion on the facial feature vector and the dressing effect feature vector, wherein the fusion result includes a latent space representation; and
obtaining the second dressing effect image by decoding the fusion result.
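The fusion steps recited in claim 1 (feature extraction, region parsing, latent-space fusion, decoding) can be sketched as follows. This is a minimal illustrative sketch only, not the patented implementation: the encoders and decoder are stand-in random linear projections, and all function names, dimensions, and the mask-based region parsing are assumptions introduced for illustration.

```python
import numpy as np

FEAT_DIM = 64  # assumed latent dimensionality (illustrative)

def extract_facial_features(facial_image: np.ndarray) -> np.ndarray:
    """Stand-in encoder: project the facial image to a facial feature vector."""
    rng = np.random.default_rng(0)  # fixed weights for reproducibility
    w = rng.standard_normal((facial_image.size, FEAT_DIM))
    return facial_image.reshape(-1) @ w

def extract_dressing_features(effect_image: np.ndarray, garment_mask: np.ndarray):
    """Parse the first dressing effect image into a garment region and an
    exposed human body region, then encode each into a feature vector."""
    body_mask = ~garment_mask  # exposed human body region not covered by garment
    rng = np.random.default_rng(1)
    w = rng.standard_normal((effect_image.size, FEAT_DIM))
    garment_vec = (effect_image * garment_mask).reshape(-1) @ w
    body_vec = (effect_image * body_mask).reshape(-1) @ w  # human body feature vector
    return garment_vec, body_vec

def fuse(facial_vec, garment_vec, body_vec):
    """Feature fusion producing a latent space representation."""
    return np.tanh(facial_vec + garment_vec + body_vec)

def decode(latent: np.ndarray, out_shape=(8, 8)) -> np.ndarray:
    """Stand-in decoder: map the fused latent back to image space."""
    rng = np.random.default_rng(2)
    w = rng.standard_normal((latent.size, int(np.prod(out_shape))))
    return (latent @ w).reshape(out_shape)

# Toy inputs: an 8x8 "facial image" and a first dressing effect image whose
# lower half is covered by the target garment.
facial_image = np.ones((8, 8))
effect_image = np.full((8, 8), 0.5)
garment_mask = np.zeros((8, 8), dtype=bool)
garment_mask[4:, :] = True

f_vec = extract_facial_features(facial_image)
g_vec, b_vec = extract_dressing_features(effect_image, garment_mask)
second_effect_image = decode(fuse(f_vec, g_vec, b_vec))
print(second_effect_image.shape)
```

In practice the encoders and decoder would be learned networks (e.g. the encoder/decoder of a generative model), and region parsing would come from a human-parsing or segmentation model rather than a hand-built mask; the sketch only mirrors the data flow recited in the claim.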