US 12,086,965 B2
Image reprojection and multi-image inpainting based on geometric depth parameters
Yunhan Zhao, Irvine, CA (US); Connelly Barnes, Seattle, WA (US); Yuqian Zhou, Urbana, IL (US); Sohrab Amirghodsi, Seattle, WA (US); and Elya Shechtman, Seattle, WA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Nov. 5, 2021, as Appl. No. 17/520,361.
Prior Publication US 2023/0145498 A1, May 11, 2023
Int. Cl. G06T 5/77 (2024.01); G06T 3/18 (2024.01); G06T 3/4046 (2024.01); G06T 5/50 (2006.01); G06T 7/30 (2017.01); G06T 7/50 (2017.01); G06T 7/90 (2017.01)
CPC G06T 5/77 (2024.01) [G06T 3/18 (2024.01); G06T 3/4046 (2013.01); G06T 5/50 (2013.01); G06T 7/30 (2017.01); G06T 7/50 (2017.01); G06T 7/90 (2017.01); G06T 2207/20084 (2013.01); G06T 2207/20221 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause a computing device to:
generate, utilizing a trained depth prediction network, a monocular depth prediction for a source image of an object or a scene;
determine a relative camera matrix between a target image of the object or the scene and the source image based on a plurality of matching correspondence points between the source image and the target image, wherein the source image differs from the target image;
determine a rescaled depth prediction based on the monocular depth prediction and the relative camera matrix; and
generate a reprojected image comprising at least a portion of the source image warped based on the rescaled depth prediction and the relative camera matrix to align with the target image.
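The listing below is a minimal, illustrative sketch of the pipeline recited in claim 1, not the patented implementation. It is written in Python with OpenCV and NumPy, and everything specific in it is an assumption: predict_depth is a hypothetical stand-in for the trained depth prediction network, both views are assumed to share a known intrinsic matrix K, the relative camera matrix is recovered from SIFT correspondences via an essential-matrix decomposition, and the claimed "rescaled depth prediction" is approximated by a least-squares scale/shift fit against sparse depths triangulated from the inlier matches. The claim does not limit how any of these steps are performed.

# Hedged sketch of the claim-1 pipeline (assumptions noted above).
import cv2
import numpy as np


def predict_depth(image_bgr: np.ndarray) -> np.ndarray:
    """Placeholder for the trained monocular depth prediction network;
    returns a per-pixel relative depth map for the source image."""
    raise NotImplementedError("plug in a monocular depth model here")


def relative_camera_matrix(src, tgt, K):
    """Estimate the relative rotation R and unit-scale translation t from
    matching correspondence points between the source and target images."""
    sift = cv2.SIFT_create()
    kp_s, des_s = sift.detectAndCompute(cv2.cvtColor(src, cv2.COLOR_BGR2GRAY), None)
    kp_t, des_t = sift.detectAndCompute(cv2.cvtColor(tgt, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_s, des_t)
    pts_s = np.float32([kp_s[m.queryIdx].pt for m in matches])
    pts_t = np.float32([kp_t[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts_s, pts_t, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_s, pts_t, K, mask=inliers)
    keep = inliers.ravel().astype(bool)
    return R, t, pts_s[keep], pts_t[keep]


def rescale_depth(depth, K, R, t, pts_s, pts_t):
    """Fit a global scale/shift so the monocular depth agrees with sparse
    depths triangulated from the inlier correspondences (one plausible way
    to obtain the claimed rescaled depth prediction)."""
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P0, P1, pts_s.T, pts_t.T)
    z_sparse = X[2] / X[3]                          # depth in the source camera
    z_mono = depth[pts_s[:, 1].astype(int), pts_s[:, 0].astype(int)]
    ok = z_sparse > 0
    A = np.stack([z_mono[ok], np.ones(ok.sum())], axis=1)
    scale, shift = np.linalg.lstsq(A, z_sparse[ok], rcond=None)[0]
    return scale * depth + shift


def reproject(src, depth, K, R, t):
    """Forward-warp source pixels into the target view using the rescaled
    depth and the relative camera matrix [R | t]."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)
    X_src = rays * depth.reshape(1, -1)             # 3-D points in the source camera
    X_tgt = R @ X_src + t                           # move into the target camera
    uvw = K @ X_tgt
    ut = np.round(uvw[0] / uvw[2]).astype(int)
    vt = np.round(uvw[1] / uvw[2]).astype(int)
    valid = (uvw[2] > 0) & (ut >= 0) & (ut < w) & (vt >= 0) & (vt < h)
    out = np.zeros_like(src)
    out[vt[valid], ut[valid]] = src.reshape(-1, 3)[valid]   # holes remain for inpainting
    return out

Forward warping of this kind leaves holes wherever the target view sees surfaces that are occluded in, or fall outside, the source frame; those holes are the regions that the multi-image inpainting named in the patent title is intended to fill.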