US 12,223,819 B2
Building security and emergency detection and advisement system
Edward Michael Donegan, Denver, CO (US); William Delmonico, Burnsville, MN (US); and Joseph Schmitt, Princeton, MN (US)
Assigned to Tabor Mountain LLC, Wilmington, DE (US)
Filed by Tabor Mountain LLC, Wilmington, DE (US)
Filed on Nov. 14, 2023, as Appl. No. 18/508,386.
Application 18/508,386 is a continuation of application No. 18/075,905, filed on Dec. 6, 2022, granted, now 11,875,661.
Application 18/075,905 is a continuation of application No. 17/377,213, filed on Jul. 15, 2021, granted, now 11,626,002.
Prior Publication US 2024/0087441 A1, Mar. 14, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G08B 21/18 (2006.01); G08B 27/00 (2006.01); G10L 15/22 (2006.01)
CPC G08B 21/18 (2013.01) [G08B 27/005 (2013.01); G10L 15/22 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A system for emergency detection and response, the system comprising:
a plurality of sensor devices positioned throughout a building; and
a computer system in network communication with the plurality of sensor devices, wherein the computer system is configured to:
continuously receive, from the plurality of sensor devices, signals generated by the plurality of sensor devices indicating conditions detected in the building;
detect an emergency in the building based on processing the continuously received signals using a machine learning model that was trained to correlate the continuously received signals with each other and with known types of emergencies;
generate visual output based on the detected emergency, wherein the visual output comprises augmented reality (AR) interaction with a user in the building;
transmit instructions for presenting the visual output to at least one sensor device of the plurality of sensor devices, wherein the at least one sensor device is configured to execute the instructions and present the visual output to the user in the building;
receive, from the at least one sensor device, audio input indicating a response of the user to the visual output;
generate updated visual output based on (i) the detected emergency and (ii) the audio input indicating the response of the user to the visual output; and
transmit instructions for presenting the updated visual output at the at least one sensor device.
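
Claim 1 recites a closed feedback loop: continuously ingest sensor signals, run them through a trained machine learning model to detect an emergency, push AR visual output to a sensor device, take back the user's spoken response, and regenerate the visual output from both the emergency and that response. The Python sketch below is not the patented implementation; it is a minimal illustration of that loop under assumed names (SensorSignal, ArInstruction, EmergencyClassifier, EmergencyAdvisor), with the trained model, the AR rendering, and speech-to-text all replaced by trivial placeholders.

# Illustrative sketch of the claim-1 control loop; all names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class SensorSignal:
    """A single reading reported by a sensor device in the building."""
    device_id: str
    kind: str            # e.g., "smoke", "temperature", "audio_level"
    value: float


@dataclass
class ArInstruction:
    """Instructions a sensor device executes to present AR visual output."""
    device_id: str
    overlay_text: str
    highlight_route: List[str] = field(default_factory=list)


class EmergencyClassifier:
    """Stand-in for the trained machine learning model of the claim.

    The claim only requires that the model correlate the continuously
    received signals with each other and with known emergency types, so a
    trivial rule is used here purely as a placeholder.
    """

    def predict(self, window: List[SensorSignal]) -> Optional[str]:
        by_kind: Dict[str, float] = {s.kind: s.value for s in window}
        if by_kind.get("smoke", 0.0) > 0.5 and by_kind.get("temperature", 0.0) > 60.0:
            return "fire"
        return None


class EmergencyAdvisor:
    """Computer system in network communication with the sensor devices."""

    def __init__(self, model: EmergencyClassifier):
        self.model = model
        self.window: List[SensorSignal] = []

    def receive_signal(self, signal: SensorSignal) -> Optional[ArInstruction]:
        # Continuously receive signals indicating conditions in the building.
        self.window.append(signal)
        emergency = self.model.predict(self.window)
        if emergency is None:
            return None
        # Generate AR visual output for the detected emergency and transmit
        # presentation instructions to the reporting sensor device.
        return ArInstruction(
            device_id=signal.device_id,
            overlay_text=f"{emergency.upper()} detected. Follow the highlighted route.",
            highlight_route=["stairwell B", "exit 3"],
        )

    def receive_audio_response(self, device_id: str, transcript: str,
                               emergency: str) -> ArInstruction:
        # Generate updated visual output from (i) the detected emergency and
        # (ii) the user's spoken response (speech recognition assumed elsewhere).
        if "blocked" in transcript.lower():
            return ArInstruction(device_id, f"{emergency.upper()}: rerouting.",
                                 ["corridor 2", "exit 1"])
        return ArInstruction(device_id, f"{emergency.upper()}: continue on route.",
                             ["stairwell B", "exit 3"])


if __name__ == "__main__":
    advisor = EmergencyAdvisor(EmergencyClassifier())
    advisor.receive_signal(SensorSignal("cam-7", "temperature", 72.0))
    first = advisor.receive_signal(SensorSignal("cam-7", "smoke", 0.9))
    print(first)
    if first is not None:
        updated = advisor.receive_audio_response("cam-7", "stairwell B is blocked", "fire")
        print(updated)

The two-step exchange in the sketch mirrors the final three claim elements: the first ArInstruction corresponds to the transmitted visual output, the transcript stands in for the audio input indicating the user's response, and the second ArInstruction corresponds to the updated visual output transmitted back to the same sensor device.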