US 12,229,692 B2
Systems and methods for coherent monitoring
Daniel Cervelli, Mountain View, CA (US); Anand Gupta, New York, NY (US); Andrew Elder, New York, NY (US); Robert Imig, Austin, TX (US); Praveen Ramalingam, Washington, DC (US); Reese Glidden, Washington, DC (US); and Matthew Fedderly, Baltimore, MD (US)
Assigned to Palantir Technologies Inc., Denver, CO (US)
Filed by Palantir Technologies Inc., Denver, CO (US)
Filed on Aug. 15, 2023, as Appl. No. 18/234,255.
Application 18/234,255 is a continuation of application No. 17/102,215, filed on Nov. 23, 2020, granted, now 11,727,317.
Application 17/102,215 is a continuation of application No. 16/565,256, filed on Sep. 9, 2019, granted, now 10,867,178, issued on Dec. 15, 2020.
Application 16/565,256 is a continuation of application No. 16/359,360, filed on Mar. 20, 2019, granted, now 10,452,913, issued on Oct. 22, 2019.
Claims priority of provisional application 62/799,292, filed on Jan. 31, 2019.
Prior Publication US 2023/0385710 A1, Nov. 30, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/04842 (2022.01); G06F 18/24 (2023.01); G06Q 10/00 (2023.01); G06T 7/20 (2017.01); G06V 20/52 (2022.01); G06V 20/64 (2022.01)
CPC G06Q 10/00 (2013.01) [G06F 3/04842 (2013.01); G06F 18/24 (2023.01); G06T 7/20 (2013.01); G06V 20/52 (2022.01); G06V 20/64 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A system for intelligently monitoring an environment, comprising:
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the system to:
obtain content representing an environment, the content comprising a plurality of frames, wherein the content comprises video content including an object captured from two angles;
identify, based on the content, one or more discrete objects observed within the environment;
track the object of the one or more discrete objects across the frames;
detect an event that deviates from one or more patterns;
in response to determining that the event deviates from the one or more patterns, flag the event;
generate a three-dimensional (3D) model from the two angles of the object according to a 3D reconstruction algorithm; and
augment a map of the environment with the generated 3D model.
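The claimed steps describe a monitoring pipeline: detect objects in video frames, track them, flag events that deviate from a pattern, reconstruct a 3D model from two camera angles, and add that model to an environment map. The following is a minimal, illustrative sketch of such a pipeline on synthetic data, not the patented implementation; every name here (`Detection`, `track`, `deviates_from_pattern`, `reconstruct_3d`) is hypothetical, the "pattern" is simplified to a fixed displacement threshold, and the "reconstruction" assumes two orthogonal camera angles rather than a general 3D reconstruction algorithm.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One detected object instance in one frame of the video content."""
    object_id: str
    frame: int
    position: tuple  # (x, y) coordinates in that frame


def track(detections):
    """Group detections by object id, ordered by frame number
    (a stand-in for a real multi-object tracker)."""
    tracks = {}
    for d in sorted(detections, key=lambda d: d.frame):
        tracks.setdefault(d.object_id, []).append(d.position)
    return tracks


def deviates_from_pattern(positions, max_step=5.0):
    """Flag an event when frame-to-frame displacement exceeds the
    expected pattern (modeled here as a fixed threshold)."""
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_step:
            return True
    return False


def reconstruct_3d(front_view, side_view):
    """Toy two-angle reconstruction: a front camera supplies (x, y),
    an orthogonal side camera supplies (z, y)."""
    (x, y), (z, _) = front_view, side_view
    return (x, y, z)


# Walk the claimed steps on synthetic data.
detections = [
    Detection("cart", 0, (0.0, 0.0)),
    Detection("cart", 1, (1.0, 0.0)),
    Detection("cart", 2, (2.0, 0.0)),
    Detection("cart", 3, (10.0, 0.0)),  # sudden jump: deviates from pattern
]
tracks = track(detections)
flagged = [oid for oid, pos in tracks.items() if deviates_from_pattern(pos)]
model = reconstruct_3d(front_view=(2.0, 3.0), side_view=(5.0, 3.0))
env_map = {"models": []}
env_map["models"].append(model)  # augment the environment map with the 3D model
print(flagged, model)  # ['cart'] (2.0, 3.0, 5.0)
```

A production system would replace the threshold check with a learned pattern model and the two-view step with a calibrated multi-view reconstruction, but the control flow mirrors the ordered operations recited in claim 1.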