US 11,995,885 B2
Automated spatial indexing of images to video
Michael Ben Fleischman, San Francisco, CA (US); Philip DeCamp, Boulder, CO (US); and Jeevan James Kalanithi, San Francisco, CA (US)
Assigned to OPEN SPACE LABS, INC., San Francisco, CA (US)
Filed by Open Space Labs, Inc., San Francisco, CA (US)
Filed on Mar. 22, 2023, as Appl. No. 18/188,300.
Application 18/188,300 is a continuation of application No. 17/501,115, filed on Oct. 14, 2021, granted, now 11,638,001.
Application 17/501,115 is a continuation of application No. 17/151,004, filed on Jan. 15, 2021, granted, now 11,178,386, issued on Nov. 16, 2021.
Application 17/151,004 is a continuation of application No. 16/680,318, filed on Nov. 11, 2019, granted, now 10,944,959, issued on Mar. 9, 2021.
Claims priority of provisional application 62/759,945, filed on Nov. 12, 2018.
Prior Publication US 2023/0222784 A1, Jul. 13, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/10 (2022.01); G06T 17/00 (2006.01); G06T 19/00 (2011.01); G06V 10/70 (2022.01); G06V 20/20 (2022.01); G06V 20/40 (2022.01); G06V 20/52 (2022.01); G06V 20/64 (2022.01); H04N 13/279 (2018.01); H04N 13/282 (2018.01); H04N 23/62 (2023.01); H04N 23/63 (2023.01); H04N 23/661 (2023.01)
CPC G06V 20/10 (2022.01) [G06T 17/00 (2013.01); G06T 19/003 (2013.01); G06V 10/70 (2022.01); G06V 20/20 (2022.01); G06V 20/52 (2022.01); G06V 20/64 (2022.01); H04N 13/279 (2018.05); H04N 13/282 (2018.05); H04N 23/62 (2023.01); H04N 23/631 (2023.01); H04N 23/661 (2023.01); G06T 2200/24 (2013.01); G06T 2210/04 (2013.01); G06T 2219/004 (2013.01); G06T 2219/024 (2013.01); G06V 20/44 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
generating a three-dimensional rendering of an environment based at least in part on video captured by an image capture system as the image capture system moves through the environment;
modifying a displayed interface to include the three-dimensional rendering of the environment;
accessing one or more content annotations created within the environment at a set of locations within the environment; and
modifying the displayed interface to include the content annotations at locations within the three-dimensional rendering of the environment corresponding to the set of locations,
wherein a location of a content annotation is determined based on a mapping of an image timestamp, captured by a system that created the content annotation, to the closest image frame timestamp captured by the image capture system.
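The timestamp mapping recited at the end of claim 1 amounts to a nearest-neighbor lookup over the video's frame timestamps. The sketch below is illustrative only and not from the patent: the function name, the flat list of frame timestamps, and the assumption that timestamps are sorted and share a common clock are all hypothetical.

```python
from bisect import bisect_left

def closest_frame_timestamp(annotation_ts, frame_timestamps):
    """Return the frame timestamp closest to annotation_ts.

    frame_timestamps must be sorted ascending. A binary search finds
    the insertion point, then the nearer of the two neighbors wins
    (ties go to the earlier frame).
    """
    i = bisect_left(frame_timestamps, annotation_ts)
    if i == 0:
        return frame_timestamps[0]
    if i == len(frame_timestamps):
        return frame_timestamps[-1]
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    return before if annotation_ts - before <= after - annotation_ts else after

# Example: annotation image captured at t=12.34 s, video frames at 10 fps
frames = [12.0, 12.1, 12.2, 12.3, 12.4, 12.5]
print(closest_frame_timestamp(12.34, frames))  # 12.3
```

Once the closest frame is identified, the annotation can inherit that frame's position in the three-dimensional rendering, which is how the claim ties the annotation's location to the image capture system's path.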