US 11,748,957 B2
Generating 3D data in a messaging system
Kyle Goodrich, Venice, CA (US); Samuel Edward Hare, Los Angeles, CA (US); Maxim Maximov Lazarov, Culver City, CA (US); Tony Mathew, Los Angeles, CA (US); Andrew James McPhee, Culver City, CA (US); Daniel Moreno, Los Angeles, CA (US); Dhritiman Sagar, Marina del Rey, CA (US); and Wentao Shang, Los Angeles, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Nov. 12, 2021, as Appl. No. 17/525,612.
Application 17/525,612 is a continuation of application No. 17/006,438, filed on Aug. 28, 2020, granted, now Pat. No. 11,189,104.
Claims priority of provisional application 62/893,037, filed on Aug. 28, 2019.
Prior Publication US 2022/0284682 A1, Sep. 8, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06T 7/194 (2017.01); G06T 7/50 (2017.01); H04L 67/131 (2022.01)
CPC G06T 19/006 (2013.01) [G06T 7/194 (2017.01); G06T 7/50 (2017.01); H04L 67/131 (2022.05); G06T 2207/10028 (2013.01); G06T 2207/30201 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
applying, using a processor, a three-dimensional (3D) effect to image data and depth data based at least in part on an augmented reality content generator, the applying the 3D effect comprising:
generating a depth map using at least the depth data,
generating a segmentation mask based at least on the image data, and
performing background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data;
generating a packed depth map based at least in part on the depth map; and
generating, using the processor, a message including information related to the applied 3D effect, the image data, and the depth data.
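 
Claim 1 recites an image-processing pipeline: a depth map is built from the depth data, a segmentation mask is derived from the image data, the background is inpainted and blurred using that mask, the depth map is packed, and a message carrying information about the applied effect together with the image and depth data is generated. The Python sketch below is a minimal, hypothetical illustration of such a pipeline, not the patented implementation: every function name and the message layout are assumptions, the segmentation here is a simple depth threshold rather than a learned portrait model, and OpenCV/NumPy are used only for convenience.

    # Minimal sketch of a 3D-effect pipeline of the kind recited in claim 1.
    # All names are hypothetical; OpenCV/NumPy are used only for illustration.
    import json
    import cv2
    import numpy as np

    def generate_depth_map(depth_data: np.ndarray) -> np.ndarray:
        """Normalize raw depth samples into a float32 depth map in [0, 1]."""
        depth = depth_data.astype(np.float32)
        return cv2.normalize(depth, None, 0.0, 1.0, cv2.NORM_MINMAX)

    def generate_segmentation_mask(depth_map: np.ndarray,
                                   near_threshold: float = 0.5) -> np.ndarray:
        """Rough foreground mask: treat near pixels as the subject.
        A production system would use a learned segmentation model."""
        mask = (depth_map < near_threshold).astype(np.uint8) * 255
        # Clean up speckle with a morphological close.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    def inpaint_and_blur_background(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Fill the region behind the subject, then blur it so the subject can be
        re-rendered with parallax without exposing holes."""
        inpainted = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
        return cv2.GaussianBlur(inpainted, (21, 21), 0)

    def pack_depth_map(depth_map: np.ndarray) -> np.ndarray:
        """Quantize the float depth map into an 8-bit single-channel image so it
        can travel alongside the color image in a message payload."""
        return (depth_map * 255.0).clip(0, 255).astype(np.uint8)

    def build_message(image: np.ndarray, depth_data: np.ndarray) -> dict:
        """Apply the 3D effect and assemble a message-like payload (hypothetical)."""
        depth_map = generate_depth_map(depth_data)
        mask = generate_segmentation_mask(depth_map)
        background = inpaint_and_blur_background(image, mask)
        packed_depth = pack_depth_map(depth_map)
        return {
            "effect": "3d_depth_effect",
            "image_png": cv2.imencode(".png", image)[1].tobytes(),
            "background_png": cv2.imencode(".png", background)[1].tobytes(),
            "packed_depth_png": cv2.imencode(".png", packed_depth)[1].tobytes(),
            "metadata": json.dumps({"near_threshold": 0.5}),
        }

Packing the depth map into an 8-bit image (pack_depth_map above) is one plausible reading of "generating a packed depth map": quantizing the floating-point depth values lets the depth travel in a standard image container alongside the color frame within the generated message.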