US 12,495,130 B2
Methods and systems for personalized image generation
Chen Liu, Hangzhou (CN); Huang Chen, Hangzhou (CN); Gaofeng He, Hangzhou (CN); and Huamin Wang, Hangzhou (CN)
Assigned to ZHEJIANG LINGDI DIGITAL TECHNOLOGY CO., LTD., Hangzhou (CN)
Filed by ZHEJIANG LINGDI DIGITAL TECHNOLOGY CO., LTD., Zhejiang (CN)
Filed on Jun. 4, 2025, as Appl. No. 19/227,533.
Application 19/227,533 is a continuation of application No. 18/976,318, filed on Dec. 10, 2024, granted, now Pat. No. 12,355,932.
Application 18/976,318 is a continuation-in-part of application No. PCT/CN2024/118044, filed on Sep. 10, 2024.
Claims priority of application No. 202310841791.X (CN), filed on Jul. 10, 2023; application No. 202311378004.9 (CN), filed on Oct. 23, 2023; application No. 202410787264.X (CN), filed on Jun. 18, 2024; and application No. 202410915615.0 (CN), filed on Jul. 9, 2024.
Prior Publication US 2025/0294126 A1, Sep. 18, 2025
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 40/16 (2022.01); G06F 40/284 (2020.01); G06F 40/40 (2020.01); G06T 7/11 (2017.01); G06T 7/174 (2017.01); G06T 11/00 (2006.01); G06T 15/00 (2011.01); G06T 17/00 (2006.01); G06V 10/77 (2022.01); G06V 20/20 (2022.01); G06V 20/64 (2022.01); H04N 13/111 (2018.01); H04N 13/398 (2018.01)
CPC H04N 13/111 (2018.05) [G06F 40/284 (2020.01); G06F 40/40 (2020.01); G06T 7/11 (2017.01); G06T 7/174 (2017.01); G06V 10/7715 (2022.01); G06V 20/20 (2022.01); G06V 20/64 (2022.01); G06V 40/171 (2022.01); H04N 13/398 (2018.05); G06T 2207/20221 (2013.01)] 15 Claims
OG exemplary drawing
 
1. A method for personalized image generation, comprising:
obtaining a facial image;
obtaining a first effect image of a target garment, the first effect image including a wearing effect of the target garment on a first model; and
generating a second effect image of the target garment by performing a fusion operation on the facial image and the first effect image, the second effect image including a wearing effect of the target garment on a fused model;
wherein the performing a fusion operation on the facial image and the first effect image includes:
obtaining facial data and/or hair data of the first model from the first effect image;
extracting a first feature vector from the facial data and/or the hair data of the first model;
extracting a second feature vector by performing face recognition on the facial image; and
obtaining a fusion result by performing feature fusion on the first feature vector and the second feature vector.
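For readers who want a concrete picture of the claimed fusion step, the following is a minimal, hypothetical Python sketch. It assumes the facial/hair data of the first model and the face recognition output are each represented as fixed-length embedding vectors, and that feature fusion is a simple convex blend. The encoders, the blend weight, and the generator below are illustrative placeholders, not the patented implementation or any disclosed embodiment.

# Hypothetical sketch of the claimed fusion operation, using placeholder
# encoders and a convex-combination fusion; not the patented method.
import numpy as np

EMBED_DIM = 512  # assumed embedding size

def encode_model_face_and_hair(first_effect_image: np.ndarray) -> np.ndarray:
    """Placeholder: extract facial and/or hair data of the first model from the
    first effect image and return the first feature vector."""
    # In practice this would be a segmentation step followed by a learned encoder.
    return np.random.default_rng(0).standard_normal(EMBED_DIM)

def encode_user_face(facial_image: np.ndarray) -> np.ndarray:
    """Placeholder: perform face recognition on the obtained facial image and
    return the second feature vector (an identity embedding)."""
    return np.random.default_rng(1).standard_normal(EMBED_DIM)

def fuse_features(v1: np.ndarray, v2: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Feature fusion as a normalized convex combination (one possible choice)."""
    fused = alpha * v1 + (1.0 - alpha) * v2
    return fused / (np.linalg.norm(fused) + 1e-8)

def generate_second_effect_image(fusion_result: np.ndarray,
                                 first_effect_image: np.ndarray) -> np.ndarray:
    """Placeholder generator: would condition on the fusion result to render the
    target garment on the fused model; here it simply returns the input image."""
    return first_effect_image

# Usage with dummy arrays standing in for the real images.
facial_image = np.zeros((256, 256, 3), dtype=np.uint8)
first_effect_image = np.zeros((512, 384, 3), dtype=np.uint8)

v1 = encode_model_face_and_hair(first_effect_image)   # first feature vector
v2 = encode_user_face(facial_image)                   # second feature vector
fusion_result = fuse_features(v1, v2)
second_effect_image = generate_second_effect_image(fusion_result, first_effect_image)

The sketch mirrors the claim's data flow only: two feature vectors are extracted from the two inputs and combined into a single fusion result that conditions generation of the second effect image; the actual extraction and fusion techniques are left open by the claim language.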