US 12,223,566 B2
Method and device for synthesizing background and face by considering face shape and using deep learning network
Ji-Su Kang, Seoul (KR); and Tae-Min Choi, Daejeon (KR)
Assigned to KLLEON INC., Seoul (KR)
Appl. No. 18/009,990
Filed by KLLEON INC., Seoul (KR)
PCT Filed Jun. 7, 2022, PCT No. PCT/KR2022/007979
§ 371(c)(1), (2) Date Dec. 13, 2022,
PCT Pub. No. WO2022/260385, PCT Pub. Date Dec. 15, 2022.
Claims priority of application No. 10-2021-0073798 (KR), filed on Jun. 7, 2021.
Prior Publication US 2024/0249448 A1, Jul. 25, 2024
Int. Cl. G09G 5/02 (2006.01); G06T 5/60 (2024.01); G06T 7/194 (2017.01); G06T 11/00 (2006.01)
CPC G06T 11/001 (2013.01) [G06T 5/60 (2024.01); G06T 7/194 (2017.01); G06T 2207/20084 (2013.01); G06T 2207/30201 (2013.01)] 10 Claims
OG exemplary drawing
 
1. A method of synthesizing a background and a face by considering a face shape and using a deep learning network, the method comprising:
(a) receiving an input of an original image and a converted face image by a reception unit;
(b) removing a first central part comprising an original face in the original image from the original image to leave the background by a data preprocessing unit, the first central part being an internal image surrounded by boundaries at a preset distance from upper, lower, left, and right boundaries of the original image;
(c) correcting colors of the converted face image to link color information of the background of the original image and color information of a second central part comprising a converted face of the converted face image by the data preprocessing unit, the second central part being an internal image surrounded by boundaries at a preset distance from upper, lower, left, and right boundaries of the converted face image;
(d) leaving the second central part by removing the background from the converted face image by the data preprocessing unit, so as not to extract only the converted face from the converted face image;
(e) overlapping and converting the background of the original image and the second central part of the converted face image into six-channel data by a data combination unit;
(f) extracting a feature vector from the six-channel data to generate a three-channel composite image by a data restoration unit; and
(g) recompositing a central part of the composite image comprising the converted face with the background of the original image after removing the background of the composite image derived from the original image by an image post-processing unit.
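The claimed pipeline, steps (a) through (g), can be illustrated with a minimal sketch in PyTorch. This is an assumption-laden illustration rather than the patented implementation: the crop margin, the per-channel mean/std color transfer standing in for step (c), the small convolutional RestorationNet standing in for the data restoration unit, and the helper names central_mask, match_channel_statistics, and synthesize are all hypothetical.

# Minimal sketch of the claimed pipeline (steps (a)-(g)); the network
# architecture, crop margin, and color-transfer method are assumptions,
# not taken from the patent.
import torch
import torch.nn as nn


def central_mask(h, w, margin, device=None):
    """Binary mask that is 1 inside the central part located at a preset
    distance (margin) from the upper, lower, left, and right boundaries."""
    mask = torch.zeros(1, 1, h, w, device=device)
    mask[..., margin:h - margin, margin:w - margin] = 1.0
    return mask


def match_channel_statistics(face_img, background_img, eps=1e-6):
    """Hypothetical color correction: align per-channel mean/std of the
    converted face image to those of the original background (step (c))."""
    f_mean = face_img.mean(dim=(2, 3), keepdim=True)
    f_std = face_img.std(dim=(2, 3), keepdim=True)
    b_mean = background_img.mean(dim=(2, 3), keepdim=True)
    b_std = background_img.std(dim=(2, 3), keepdim=True)
    return (face_img - f_mean) / (f_std + eps) * b_std + b_mean


class RestorationNet(nn.Module):
    """Placeholder data restoration unit: maps six-channel input to a
    three-channel composite image (step (f))."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)


def synthesize(original, converted_face, margin=32, net=None):
    """original, converted_face: (N, 3, H, W) tensors in [0, 1] (step (a))."""
    n, _, h, w = original.shape
    mask = central_mask(h, w, margin, device=original.device)

    background = original * (1.0 - mask)                              # step (b)
    corrected = match_channel_statistics(converted_face, background)  # step (c)
    face_part = corrected * mask                                      # step (d)

    six_channel = torch.cat([background, face_part], dim=1)           # step (e)
    net = net or RestorationNet()
    composite = net(six_channel)                                      # step (f)

    # Step (g): keep the composite's central part, restore the original background.
    return composite * mask + original * (1.0 - mask)


if __name__ == "__main__":
    orig = torch.rand(1, 3, 256, 256)
    swapped = torch.rand(1, 3, 256, 256)
    out = synthesize(orig, swapped)
    print(out.shape)  # torch.Size([1, 3, 256, 256])

Concatenating the hole-punched background and the color-corrected central part along the channel axis gives the restoration network both sources at once, so it can blend the seam between them, while step (g) guarantees that the final background pixels come from the original image.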