US 11,676,342 B2
Providing 3D data for messages in a messaging system
Kyle Goodrich, Venice, CA (US); Samuel Edward Hare, Los Angeles, CA (US); Maxim Maximov Lazarov, Culver City, CA (US); Tony Mathew, Los Angeles, CA (US); Andrew James McPhee, Culver City, CA (US); Daniel Moreno, Los Angeles, CA (US); Dhritiman Sagar, Marina del Rey, CA (US); and Wentao Shang, Los Angeles, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Sep. 23, 2022, as Appl. No. 17/952,051.
Application 17/952,051 is a continuation of application No. 17/006,538, filed on Aug. 28, 2020, granted, now Pat. No. 11,488,359.
Claims priority of provisional application 62/893,050, filed on Aug. 28, 2019.
Prior Publication US 2023/0017627 A1, Jan. 19, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06F 3/04842 (2022.01); G06T 7/50 (2017.01); H04L 51/42 (2022.01)
CPC G06T 19/00 (2013.01) [G06F 3/04842 (2013.01); G06T 7/50 (2017.01); H04L 51/42 (2022.05); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
11. A system comprising:
a processor; and
a memory including instructions that, when executed by the processor, cause the processor to perform operations comprising:
generating depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device; and
applying, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator, the applying the 3D effect comprising:
generating a depth map using at least the depth data,
generating a packed depth map based at least in part on the depth map, the generating the packed depth map comprising
converting a single channel floating point texture to a raw depth map, the raw depth map having a lower resolution than the captured image data, and
generating multiple channels based at least in part on the raw depth map,
generating a segmentation mask based at least on the captured image data, and
performing background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data, the performing the background inpainting comprising performing a diffusion based inpainting technique that fills in a missing region by propagating image content from a boundary between the missing region and a background region to an interior of the missing region, wherein the background region comprises a particular region of the captured image data without a foreground subject and the missing region includes the foreground subject.
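
The depth-generation step of claim 11 runs a machine learning model over the captured camera frame to predict per-pixel depth. A minimal Python/NumPy sketch of that step follows; the model callable, its input/output shapes, and the min-max normalization are hypothetical stand-ins, not details taken from the patent.

    import numpy as np

    def estimate_depth(image_rgb, model):
        """Run a monocular depth-estimation network on one camera frame.

        image_rgb: H x W x 3 uint8 frame from the client device's camera.
        model:     hypothetical callable mapping a (1, H, W, 3) float array
                   to a (1, H', W', 1) single-channel depth prediction.
        Returns a float32 depth map normalized to [0, 1].
        """
        x = image_rgb.astype(np.float32) / 255.0   # scale pixels to [0, 1]
        pred = np.asarray(model(x[None, ...]))     # add a batch dimension
        depth = pred[0, ..., 0]                    # drop batch and channel dims
        d_min, d_max = float(depth.min()), float(depth.max())
        return (depth - d_min) / max(d_max - d_min, 1e-6)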
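
The packed-depth-map step converts a single-channel floating-point depth texture into a lower-resolution raw depth map and spreads it across multiple 8-bit channels, which lets higher-precision depth travel through image paths that only carry 8-bit data. Below is a sketch under assumed parameters (downscale factor, 16-bit quantization, two-channel layout); none of these specifics are recited in the claim.

    import numpy as np

    def pack_depth_map(depth, downscale=2):
        """Pack a single-channel float32 depth texture (values in [0, 1]) into
        a lower-resolution, multi-channel 8-bit map.

        Channel 0 holds the most-significant byte and channel 1 the
        least-significant byte, preserving roughly 16 bits of precision.
        The downscale factor and channel layout are illustrative assumptions.
        """
        raw = depth[::downscale, ::downscale]      # lower-resolution raw depth map
        q16 = np.round(np.clip(raw, 0.0, 1.0) * 65535.0).astype(np.uint32)
        high = (q16 >> 8).astype(np.uint8)         # most-significant byte
        low = (q16 & 0xFF).astype(np.uint8)        # least-significant byte
        return np.stack([high, low], axis=-1)

    def unpack_depth_map(packed):
        """Invert pack_depth_map back to float32 depth in [0, 1]."""
        high = packed[..., 0].astype(np.uint32)
        low = packed[..., 1].astype(np.uint32)
        return ((high << 8) | low).astype(np.float32) / 65535.0

Packing of this kind is a common way to carry depth alongside the RGB frame through texture and transport paths that assume 8-bit channels.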
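
The background-inpainting step is described as a diffusion-based technique that fills the missing (foreground) region by propagating image content from its boundary with the background toward its interior. The sketch below implements one simple such scheme: a Jacobi-style iteration that repeatedly averages each missing pixel's neighbors while keeping known background pixels fixed, so values flow inward from the boundary. The iteration count and four-neighbor stencil are assumptions; production inpainting typically uses a faster PDE solver.

    import numpy as np

    def diffusion_inpaint(image, missing, iters=500):
        """Fill the masked foreground region by diffusing background content inward.

        image:   H x W x 3 float32 frame.
        missing: H x W boolean mask, True where the foreground subject is
                 (the region to be replaced with synthesized background).
        iters:   number of diffusion iterations (illustrative choice).
        """
        out = image.copy()
        out[missing] = 0.0                   # start the missing region empty
        m = missing[..., None]               # broadcast mask over color channels
        for _ in range(iters):
            # Average the four axis-aligned neighbors, replicating edge pixels.
            p = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
            avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
            # Only missing pixels are updated; the fixed background pixels act as
            # boundary conditions, so their content propagates into the interior.
            out = np.where(m, avg, out)
        return out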
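
The blurring half of the same step can be read as a mask-driven composite: blur the (inpainted) background and keep the foreground subject sharp wherever the segmentation mask is set. A short sketch using OpenCV and NumPy; the Gaussian kernel size and hard mask threshold are assumed values.

    import cv2
    import numpy as np

    def blur_background(image, mask, ksize=21):
        """Blur everything outside the foreground segmentation mask.

        image: H x W x 3 uint8 frame (e.g., the background-inpainted image data).
        mask:  H x W mask, nonzero where the foreground subject is.
        ksize: Gaussian kernel size (assumed value; must be odd).
        """
        blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
        alpha = (mask > 0).astype(np.float32)[..., None]   # 1 on the subject, 0 elsewhere
        # Composite: the subject stays sharp, the background comes from the blurred frame.
        out = alpha * image.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
        return out.astype(np.uint8)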