CPC G05D 1/0214 (2013.01) [G05D 1/0027 (2013.01); G05D 1/0094 (2013.01); G05D 1/0223 (2013.01)]
20 Claims

1. A method, comprising:
receiving position data at a first model operating on a node, wherein the first model is trained using a set of historical data that includes positional data and video data, wherein the node includes sensors configured to generate the position data at the node, the position data including time series data, wherein the position data determines a position of the node in an environment, a direction of movement of the node in the environment, an anticipated trajectory of the node, and a velocity of the node;
generating, by an object model, a set of cues from video data generated at the node or in the environment, wherein the set of cues includes information associated with the video data, including one or more of color contrast, edge density, superpixel straddling, or a number of edges, or combinations thereof;
determining, by the object model, an objectness score for the video data;
selecting first video frames from the video data that have objectness scores greater than a threshold objectness score and discarding second video frames from the video data that have objectness scores equal to or lower than the threshold objectness score;
generating an event, by the first model, based on most recent position data and the first video frames that correlate to the most recent position data;
providing the event to a pipeline;
making a decision by the pipeline based on the event generated by the first model;
performing the decision at the node and auditing the event based on the first video frames.
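The cues recited in the claim (color contrast, edge density, superpixel straddling, number of edges) echo generic objectness measures from the computer-vision literature. The sketch below is illustrative only and is not the patented method: it scores frames with simplified stand-ins for two of the cues (superpixel straddling and edge counting are omitted for brevity), and the function names, cue weights, and threshold value are all assumptions introduced here.

```python
import numpy as np

def edge_density(gray: np.ndarray, thresh: float = 0.2) -> float:
    """Fraction of pixels whose gradient magnitude exceeds a threshold;
    a simplified stand-in for the claim's edge-density cue."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > thresh * mag.max()).mean()) if mag.max() > 0 else 0.0

def color_contrast(frame: np.ndarray) -> float:
    """Chi-square distance between center-window and whole-frame intensity
    histograms; a simplified stand-in for the claim's color-contrast cue."""
    h, w = frame.shape[:2]
    ch, cw = max(h // 4, 1), max(w // 4, 1)
    center = frame[ch:-ch, cw:-cw]
    hist = lambda x: np.histogram(x, bins=16, range=(0, 255))[0] / x.size
    hc, hs = hist(center), hist(frame)
    return float(0.5 * np.sum((hc - hs) ** 2 / (hc + hs + 1e-9)))

def objectness_score(frame: np.ndarray, weights=(0.5, 0.5)) -> float:
    """Weighted combination of the two cues above (weights are arbitrary)."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    return weights[0] * color_contrast(frame) + weights[1] * edge_density(gray)

def select_frames(frames, threshold):
    """Keep 'first video frames' scoring above the threshold; the rest
    (the claim's 'second video frames') are discarded."""
    return [(i, f) for i, f in enumerate(frames) if objectness_score(f) > threshold]

# Example: keep frames scoring above 0.1 (the threshold is an arbitrary assumption)
frames = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
kept = select_frames(frames, threshold=0.1)
```

Scoring whole frames (rather than candidate windows within a frame) keeps the sketch aligned with the claim's per-frame selection step; a windowed objectness measure would be a natural extension but is not required by the claim language.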
|
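The claim does not specify how the first video frames "correlate to" the most recent position data; a timestamp window is one plausible mechanism, assumed here. All type and function names below (PositionSample, Event, generate_event, pipeline_decide) are hypothetical, and the decision logic is a placeholder for whatever the pipeline actually does.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PositionSample:
    t: float                   # timestamp of the time-series sample
    xy: Tuple[float, float]    # position of the node in the environment
    heading: float             # direction of movement
    velocity: float            # speed of the node

@dataclass
class Event:
    position: PositionSample
    frame_ids: List[int]       # indices of retained frames correlated in time

def generate_event(positions: List[PositionSample],
                   kept: List[Tuple[int, float]],
                   window: float = 0.5) -> Event:
    """Pair the most recent position sample with retained frames whose
    timestamps fall within +/- window seconds of it (assumed mechanism).
    'kept' holds (frame_index, frame_timestamp) pairs; 'positions' is
    assumed non-empty."""
    latest = max(positions, key=lambda p: p.t)
    correlated = [idx for idx, ft in kept if abs(ft - latest.t) <= window]
    return Event(position=latest, frame_ids=correlated)

def pipeline_decide(event: Event) -> str:
    """Placeholder decision: act only when corroborating frames exist,
    so the event can later be audited against those frames."""
    return "act" if event.frame_ids else "hold"
```

Carrying the frame indices inside the event is what makes the final auditing step possible: the decision performed at the node can be checked after the fact against the retained frames that triggered it.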