US 12,231,609 B2
Effects for 3D data in a messaging system
Kyle Goodrich, Venice, CA (US); Samuel Edward Hare, Los Angeles, CA (US); Maxim Maximov Lazarov, Culver City, CA (US); Tony Mathew, Irvine, CA (US); Andrew James McPhee, Culver City, CA (US); Daniel Moreno, New York, NY (US); Dhritiman Sagar, Marina del Rey, CA (US); and Wentao Shang, Los Angeles, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Oct. 18, 2023, as Appl. No. 18/489,688.
Application 18/489,688 is a continuation of application No. 17/950,761, filed on Sep. 22, 2022, granted, now 11,825,065.
Application 17/950,761 is a continuation of application No. 17/006,507, filed on Aug. 28, 2020, granted, now 11,457,196.
Claims priority of provisional application 62/893,048, filed on Aug. 28, 2019.
Prior Publication US 2024/0048678 A1, Feb. 8, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 13/128 (2018.01); G06T 7/571 (2017.01); G06T 7/593 (2017.01); G06T 19/00 (2011.01); H04L 67/131 (2022.01); H04N 13/00 (2018.01); H04N 13/111 (2018.01); H04N 13/239 (2018.01)
CPC H04N 13/128 (2018.05) [G06T 7/571 (2017.01); G06T 7/593 (2017.01); G06T 19/006 (2013.01); H04L 67/131 (2022.05); H04N 13/111 (2018.05); H04N 13/239 (2018.05); G06T 2200/24 (2013.01); G06T 2207/30201 (2013.01); H04N 2013/0081 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
applying, to image data and depth data, a 3D effect based at least in part on an augmented reality content generator, the applying the 3D effect comprising:
generating a depth map using at least the depth data,
generating a segmentation mask based at least on the image data, and
performing background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data, the performing the background inpainting comprising performing a diffusion based inpainting technique that fills in a missing region by propagating image content from a boundary between the missing region and a background region to an interior of the missing region,
wherein the background region comprises a particular region of the image data without a foreground subject and the missing region includes the foreground subject,
generating the depth map comprises converting a single channel floating point texture into a raw depth map, and
portions of the single channel floating point texture are sent into multiple lower precision channels; and
generating a 3D message based at least in part on the applied 3D effect.
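
The claim's inpainting step fills a missing region, the foreground subject cut out by the segmentation mask, by propagating background content from the region's boundary toward its interior. The patent does not supply an implementation; the sketch below is a minimal diffusion-style fill of that kind, an iterative neighbour-averaging pass in numpy, with a scipy Gaussian blur standing in for the blurring step. The function names, iteration count, and synthetic mask are illustrative assumptions, not taken from the specification.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def diffusion_inpaint(image: np.ndarray, missing: np.ndarray, iters: int = 1000) -> np.ndarray:
    """Fill the `missing` region (True where the foreground subject was) by
    repeatedly averaging each missing pixel with its four neighbours, which
    propagates background content inward from the boundary of the region."""
    out = image.astype(np.float64).copy()
    out[missing] = 0.0  # the hole starts empty
    for _ in range(iters):  # more iterations give a smoother interior fill
        up = np.roll(out, -1, axis=0)
        up[-1] = out[-1]  # undo the wrap-around at each edge
        down = np.roll(out, 1, axis=0)
        down[0] = out[0]
        left = np.roll(out, -1, axis=1)
        left[:, -1] = out[:, -1]
        right = np.roll(out, 1, axis=1)
        right[:, 0] = out[:, 0]
        avg = (up + down + left + right) / 4.0
        out[missing] = avg[missing]  # known background pixels stay fixed
    return out


# Toy example: a synthetic background gradient with a square "subject" cut out.
h, w = 64, 64
frame = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
foreground_mask = np.zeros((h, w), dtype=bool)
foreground_mask[16:48, 16:48] = True  # stand-in for a model-produced segmentation mask
inpainted = diffusion_inpaint(frame, foreground_mask)
blurred_background = gaussian_filter(inpainted, sigma=3.0)  # the blurring step
```

In a production pipeline the mask would come from a portrait-segmentation model and the fill would run in far fewer, GPU-resident passes, but the boundary-to-interior propagation is the mechanism the claim recites.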
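
The final wherein clauses describe generating the depth map by converting a single-channel floating point texture into a raw depth map, with portions of the floating point values sent into multiple lower-precision channels. One common way to realize that kind of split is fixed-point packing into 8-bit channels; the numpy sketch below illustrates the idea under that assumption (the three-channel split and the function names are not taken from the specification).

```python
import numpy as np


def pack_depth(depth: np.ndarray) -> np.ndarray:
    """Split a single-channel float depth texture (values in [0, 1)) across
    three 8-bit channels, i.e. 24-bit fixed point spread over lower-precision channels."""
    d = np.clip(depth.astype(np.float64), 0.0, 1.0 - 1e-9)
    fixed = np.floor(d * 256.0 ** 3)        # 24-bit fixed-point value
    r = np.floor(fixed / 256.0 ** 2) % 256  # high byte
    g = np.floor(fixed / 256.0) % 256       # middle byte
    b = fixed % 256                         # low byte
    return np.stack([r, g, b], axis=-1).astype(np.uint8)


def unpack_depth(packed: np.ndarray) -> np.ndarray:
    """Reassemble the raw depth map from the three lower-precision channels."""
    r, g, b = (packed[..., i].astype(np.float64) for i in range(3))
    return (r * 256.0 ** 2 + g * 256.0 + b) / 256.0 ** 3


# Round trip: the reconstructed raw depth map matches to roughly 24-bit precision.
depth = np.random.rand(4, 4).astype(np.float32)
restored = unpack_depth(pack_depth(depth))
assert np.allclose(depth, restored, atol=2.0 ** -24 + 1e-7)
```

Packing of this sort lets high-precision depth travel through texture formats that only carry low-precision channels and still be reconstructed as a raw depth map on the other side.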