US 12,192,593 B2
Utilizing generative models for resynthesis of transition frames in clipped digital videos
Xiaojuan Wang, Bellevue, WA (US); Richard Zhang, San Francisco, CA (US); Taesung Park, Albany, CA (US); Yang Zhou, San Jose, CA (US); and Elya Shechtman, Seattle, WA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Feb. 3, 2023, as Appl. No. 18/164,348.
Prior Publication US 2024/0267597 A1, Aug. 8, 2024
Int. Cl. H04N 21/234 (2011.01); G06V 10/771 (2022.01); G06V 10/82 (2022.01); H04N 21/81 (2011.01)
CPC H04N 21/8153 (2013.01) [G06V 10/771 (2022.01); G06V 10/82 (2022.01); H04N 21/23424 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving a clipped digital video comprising a pre-cut frame prior to a gap in the clipped digital video and a post-cut frame following the gap in the clipped digital video;
generating a sequence of transition keypoint maps utilizing the pre-cut frame and the post-cut frame; and
generating, utilizing a generative neural network, a sequence of transition frames for the gap in the clipped digital video from the sequence of transition keypoint maps.
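For a concrete picture of the flow recited in claim 1, the sketch below walks through the three steps in Python (PyTorch). Everything here is an illustrative assumption rather than the patented implementation: the class names KeypointEncoder and TransitionGenerator, the tiny convolutional architectures, the number of keypoints, and especially the linear blending used to build the sequence of transition keypoint maps are stand-ins, since the gazette entry does not disclose the models at this level of detail.

```python
# Minimal sketch of the claimed pipeline. All architectures, names, and the
# linear keypoint-map blending are assumptions for illustration only.
import torch
import torch.nn as nn


class KeypointEncoder(nn.Module):
    """Predicts a keypoint map from a single RGB frame (assumed design)."""

    def __init__(self, num_keypoints: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_keypoints, 3, padding=1),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)


class TransitionGenerator(nn.Module):
    """Generative network that renders an RGB frame from a keypoint map."""

    def __init__(self, num_keypoints: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_keypoints, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, keypoint_map: torch.Tensor) -> torch.Tensor:
        return self.net(keypoint_map)


def resynthesize_gap(pre_cut: torch.Tensor, post_cut: torch.Tensor,
                     num_transition_frames: int = 8) -> torch.Tensor:
    """Fill the gap between a pre-cut and a post-cut frame.

    Mirrors the claim: (1) derive keypoint maps for the boundary frames,
    (2) build a sequence of transition keypoint maps (here a simple linear
    blend, purely for illustration), (3) decode each map into an RGB
    transition frame with the generative network.
    """
    encoder, generator = KeypointEncoder(), TransitionGenerator()
    kp_pre = encoder(pre_cut.unsqueeze(0))
    kp_post = encoder(post_cut.unsqueeze(0))

    frames = []
    for i in range(1, num_transition_frames + 1):
        alpha = i / (num_transition_frames + 1)
        kp_t = (1 - alpha) * kp_pre + alpha * kp_post  # transition keypoint map
        frames.append(generator(kp_t))
    return torch.cat(frames, dim=0)  # (T, 3, H, W) sequence of transition frames


if __name__ == "__main__":
    pre = torch.rand(3, 64, 64)    # pre-cut frame
    post = torch.rand(3, 64, 64)   # post-cut frame
    print(resynthesize_gap(pre, post).shape)  # torch.Size([8, 3, 64, 64])
```

The sketch only mirrors the structure of the claim (boundary frames in, keypoint-map sequence in the middle, generated transition frames out); the claim itself places no restriction on how the keypoint maps or the generative neural network are realized.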