US 12,075,195 B2
Intelligent video surveillance system and method
Timothy Sulzer, Jenkintown, PA (US); Michael Lahiff, Jenkintown, PA (US); and Marcus Day, Jenkintown, PA (US)
Assigned to ZeroEyes, Inc., Conshohocken, PA (US)
Filed by ZeroEyes, Inc., Conshohocken, PA (US)
Filed on Aug. 11, 2023, as Appl. No. 18/233,073.
Application 18/233,073 is a continuation of application No. 17/714,941, filed on Apr. 6, 2022, granted, now 11,765,321.
Application 17/714,941 is a continuation of application No. 16/876,535, filed on May 18, 2020, granted, now 11,308,335, issued on Apr. 19, 2022.
Claims priority of provisional application 62/849,417, filed on May 17, 2019.
Prior Publication US 2023/0396737 A1, Dec. 7, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 7/18 (2006.01); G06F 18/21 (2023.01); G06F 18/214 (2023.01); G06F 18/28 (2023.01); G06V 10/25 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/40 (2022.01); G06V 20/52 (2022.01)
CPC H04N 7/18 (2013.01) [G06F 18/2148 (2023.01); G06F 18/217 (2023.01); G06F 18/28 (2023.01); G06V 10/25 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/46 (2022.01); G06V 20/52 (2022.01); H04N 7/183 (2013.01)] 34 Claims
OG exemplary drawing
 
1. A method for training an object detection device, the method comprising:
receiving a video stream;
selecting a first set of frames from the video stream;
detecting a presence of an object in one or more frames of the set of frames;
inserting bounding boxes in an area of the object in each of the one or more frames;
annotating the bounding boxes with one or more attributes of the object;
storing the one or more frames and the annotated bounding boxes in a database, the database configured to be searchable by at least one attribute of the one or more attributes;
training a detection model using the database, wherein the training includes varying a first parameter of the detection model;
analyzing a second set of frames from the video stream using the detection model to determine a number of true positive (“TP”) events and a number of false positive (“FP”) events, wherein the second set of frames is different than the first set of frames;
creating a dataset of the analysis of the second set of frames;
filtering the dataset using a second parameter;
determining a ratio of TP events to FP events (“TP/FP”);
determining a ratio of FP events to TP events (“FP/TP”);
converting the FP/TP to a percentage (“% FP/TP”); and
evaluating a performance of the detection model based at least in part on a ratio of % FP/TP to TP/FP.
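The final metric steps of the claim (determining TP/FP, determining FP/TP, converting FP/TP to a percentage, and evaluating performance via the ratio of % FP/TP to TP/FP) can be sketched as follows. This is a minimal illustration of the arithmetic only, not the patented system; the function and variable names are hypothetical, and the claim does not specify how the score should be thresholded or compared across models.

```python
def evaluate_detection_model(tp_events: int, fp_events: int) -> dict:
    """Compute the claim's evaluation metrics from TP/FP counts.

    Assumes both counts are positive; the claim does not address
    the degenerate zero-count cases.
    """
    if tp_events <= 0 or fp_events <= 0:
        raise ValueError("TP and FP counts must both be positive")

    tp_fp = tp_events / fp_events          # ratio of TP events to FP events
    fp_tp = fp_events / tp_events          # ratio of FP events to TP events
    pct_fp_tp = fp_tp * 100.0              # FP/TP expressed as a percentage
    performance = pct_fp_tp / tp_fp        # ratio of % FP/TP to TP/FP

    return {
        "TP/FP": tp_fp,
        "FP/TP": fp_tp,
        "% FP/TP": pct_fp_tp,
        "performance": performance,
    }


# Example: 80 true positives and 20 false positives
metrics = evaluate_detection_model(tp_events=80, fp_events=20)
```

Note that the composite score simplifies algebraically to 100 × (FP/TP)², so it penalizes false positives quadratically relative to true positives; a lower score indicates better detection performance under this metric.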