US 12,254,545 B2
Generating modified digital images incorporating scene layout utilizing a swapping autoencoder
Taesung Park, Albany, CA (US); Alexei A Efros, Berkeley, CA (US); Elya Shechtman, Seattle, WA (US); Richard Zhang, San Francisco, CA (US); and Junyan Zhu, Cambridge, MA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Apr. 10, 2023, as Appl. No. 18/298,138.
Application 18/298,138 is a continuation of application No. 17/091,416, filed on Nov. 6, 2020, granted, now Pat. No. 11,625,875.
Prior Publication US 2023/0245363 A1, Aug. 3, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 11/60 (2006.01); G06N 3/045 (2023.01); G06N 3/088 (2023.01); G06T 7/10 (2017.01)
CPC G06T 11/60 (2013.01) [G06N 3/045 (2023.01); G06N 3/088 (2023.01); G06T 7/10 (2017.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
extracting a texture code from a digital image utilizing an encoder of a swapping autoencoder that includes the encoder and a generator neural network;
extracting a structure code from the digital image utilizing the encoder of the swapping autoencoder;
receiving a scene layout map defining semantic regions that indicate boundaries for semantically labeled image content and indicating content of a semantic label not present within the digital image; and
generating, utilizing the generator neural network of the swapping autoencoder to combine the texture code and the structure code as guided by the scene layout map, a modified digital image by:
arranging content of the digital image according to the boundaries for the semantically labeled image content from the scene layout map; and
generating, utilizing the generator neural network, pixels depicting content of the semantic label not present within the digital image.
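The claimed pipeline separates an image into two codes (a texture code capturing appearance and a structure code capturing spatial layout) and recombines them under the guidance of a scene layout map. The toy sketch below illustrates that data flow only; it is a hypothetical numpy stand-in, not the patented neural-network implementation. All function names, the choice of per-channel mean as "texture," the 4x downsampled luminance grid as "structure," and the label convention in `layout_map` are illustrative assumptions.

```python
import numpy as np

def encode(image):
    """Toy stand-in for the swapping autoencoder's encoder (hypothetical).

    Returns:
      texture_code: a global statistic (per-channel mean), standing in for
        an appearance code independent of spatial arrangement.
      structure_code: a 4x-downsampled luminance grid, standing in for a
        spatial code independent of appearance.
    """
    texture_code = image.mean(axis=(0, 1))        # shape (C,)
    lum = image.mean(axis=2)                      # shape (H, W)
    h, w = lum.shape
    structure_code = lum.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    return texture_code, structure_code

def generate(texture_code, structure_code, layout_map):
    """Toy stand-in for the generator: combines the two codes as guided by
    a scene layout map (0 = arrange source content per the map's boundaries,
    1 = region whose semantic label is absent from the source image)."""
    h, w = layout_map.shape
    # Upsample the structure code back to output resolution (nearest neighbour).
    structure = np.kron(structure_code, np.ones((4, 4)))[:h, :w]
    # Modulate the spatial structure by the per-channel texture code.
    out = structure[..., None] * texture_code[None, None, :]
    # Where the layout map requests content not present in the source image,
    # synthesize pixels from the texture code alone (a crude placeholder for
    # the generator producing new semantic content).
    out[layout_map == 1] = texture_code
    return out
```

In the actual claim, both codes come from a learned encoder and the recombination is performed by a generator neural network; here the "swap" is reduced to arithmetic so the separation of roles (structure arranges, texture fills, layout map overrides) is visible in a few lines.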