US 12,444,138 B2
Rendering 3D captions within real-world environments
Kyle Goodrich, Venice, CA (US); Samuel Edward Hare, Los Angeles, CA (US); Maxim Maximov Lazarov, Culver City, CA (US); Tony Mathew, Irvine, CA (US); Andrew James McPhee, Culver City, CA (US); Daniel Moreno, New York, NY (US); and Wentao Shang, Los Angeles, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Jul. 3, 2024, as Appl. No. 18/763,468.
Application 18/763,468 is a continuation of application No. 18/073,280, filed on Dec. 1, 2022, granted, now Pat. No. 12,106,441.
Application 18/073,280 is a continuation of application No. 17/319,399, filed on May 13, 2021, granted, now Pat. No. 11,620,791.
Application 17/319,399 is a continuation of application No. 16/696,600, filed on Nov. 26, 2019, granted, now Pat. No. 11,210,850.
Claims priority of provisional application 62/775,713, filed on Dec. 5, 2018.
Claims priority of provisional application 62/771,964, filed on Nov. 27, 2018.
Prior Publication US 2024/0362873 A1, Oct. 31, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 17/20 (2006.01); G06T 3/20 (2006.01); G06T 3/40 (2006.01); G06T 7/20 (2017.01); G06T 7/246 (2017.01); G06T 11/60 (2006.01); G06T 13/20 (2011.01); G06T 15/00 (2011.01); G06T 15/04 (2011.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01)
CPC G06T 17/20 (2013.01) [G06T 3/20 (2013.01); G06T 3/40 (2013.01); G06T 7/20 (2013.01); G06T 7/251 (2017.01); G06T 11/60 (2013.01); G06T 13/20 (2013.01); G06T 15/00 (2013.01); G06T 15/04 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); G06F 2218/00 (2023.01); G06T 2219/2004 (2013.01); G06T 2219/2012 (2013.01); G06T 2219/2016 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system comprising:
at least one hardware processor; and
a memory storing instructions which, when executed by the at least one hardware processor, cause the at least one hardware processor to perform operations comprising:
receiving, via an interactive interface including a live view of a camera feed, a first input comprising one or more text characters;
detecting a first reference surface in a three-dimensional (3D) space captured within the live view of the camera feed;
rendering a 3D caption based on the one or more text characters at a first position in the 3D space captured within the live view of the camera feed based on the first reference surface;
receiving a second input to move the 3D caption in the 3D space captured within the live view of the camera feed;
detecting a second reference surface in the 3D space captured within the live view of the camera feed based on the second input;
rendering the 3D caption at a second position in the 3D space captured within the live view of the camera feed based on the second reference surface;
capturing one or more images from the live view of the camera feed; and
generating a message that includes the one or more images with the 3D caption rendered at the second position in the 3D space captured within the live view of the camera feed.
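The operations recited in claim 1 describe a pipeline: accept text input over a live camera view, anchor a 3D caption to a detected reference surface, re-anchor it when the user moves it, then capture images and package them into a message. What follows is a minimal Python sketch of that flow under stated assumptions, not an implementation disclosed by the patent; every name in it (ReferenceSurface, Caption3D, detect_reference_surface, capture_frames, run_caption_flow) is hypothetical, and the surface detection, rendering, and messaging steps are stubbed so the control flow can run end to end.

    # Hypothetical sketch of the claimed flow; all names and stubs are illustrative only.
    from dataclasses import dataclass


    @dataclass
    class ReferenceSurface:
        """A detected plane in the 3D space captured within the live camera feed."""
        normal: tuple   # surface normal (x, y, z)
        anchor: tuple   # world-space point on the surface


    @dataclass
    class Caption3D:
        text: str
        position: tuple = (0.0, 0.0, 0.0)

        def render_on(self, surface: ReferenceSurface) -> None:
            # Place the caption at the surface anchor; a real renderer would also
            # orient the text mesh to the surface normal.
            self.position = surface.anchor


    def detect_reference_surface(screen_point=None) -> ReferenceSurface:
        # Stub: a real system would ray-cast from the screen point (or the view
        # center) into planes tracked in the camera feed.
        anchor = screen_point or (0.0, -0.5, -1.0)
        return ReferenceSurface(normal=(0.0, 1.0, 0.0), anchor=anchor)


    def capture_frames(count: int = 1) -> list:
        # Stub: capture frames from the live view with the caption composited in.
        return [f"frame_{i}" for i in range(count)]


    def run_caption_flow(text_input: str, move_input: tuple) -> dict:
        """Walk through the operations recited in claim 1."""
        # First input: text characters entered via the interactive interface.
        caption = Caption3D(text=text_input)

        # Detect a first reference surface and render the caption there.
        first_surface = detect_reference_surface()
        caption.render_on(first_surface)

        # Second input: a gesture moving the caption; detect a second surface
        # based on that input and re-render the caption at the second position.
        second_surface = detect_reference_surface(screen_point=move_input)
        caption.render_on(second_surface)

        # Capture one or more images and package them into a message.
        images = capture_frames(count=3)
        return {"images": images, "caption": caption}


    if __name__ == "__main__":
        message = run_caption_flow("Hello world", move_input=(0.4, 0.0, -0.8))
        print(message)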