| CPC G06F 21/566 (2013.01) [G06F 21/552 (2013.01); G06F 21/577 (2013.01); H04L 41/069 (2013.01); H04L 63/1408 (2013.01); H04L 63/1425 (2013.01); H04L 63/1433 (2013.01)] | 17 Claims |

1. A method for detection of a cyber-threat to a computer system, the method arranged to be performed by one or more processing apparatuses, the method comprising:
receiving input data that comprises data associated with a first entity related to activity on the computer system and data associated with a second entity;
deriving, from the received input data, metrics representative of characteristics of the received input data;
analyzing the derived metrics using a first self-learning model trained on a normal behavior of at least the first entity;
analyzing one or more causal links between data associated with the first entity and data associated with the second entity gathered over one or more days;
predicting an expected behavior of at least the first entity of the computer system based on the first self-learning model;
determining, in accordance with the analyzed derived metrics and the one or more causal links, a cyber-threat risk parameter indicative of a likelihood of the cyber-threat,
wherein determining the cyber-threat risk parameter comprises comparing the analyzed, derived metrics with the predicted expected behavior, determining whether parameters of the analyzed, derived metrics fall outside the parameters set by a threat parameter benchmark, and considering the one or more causal links that include a comparison between a behavior of the first entity based on analyzed, derived metrics associated with the first entity and a behavior of the second entity based on analyzed, derived metrics associated with the second entity,
wherein the first entity is a first user and false positives are mitigated by at least considering unusual behavior by the first user as being normal behavior by the first user when similar unusual behavior is observed as being conducted by a second user, and
wherein the first self-learning model trained on the normal behavior of at least the first entity develops a pattern of life for the first entity based on data gathered regarding the first entity over time,
where the pattern of life for the first entity is dynamically updated as more information is gathered over time of operation of the first self-learning model monitoring the first entity,
where what is considered the normal behavior is used as a moving benchmark, allowing a threat detection system to spot behavior for the first entity that seems to fall outside of the normal behavior for the pattern of life,
where the threat detection system flags the behavior for the first entity that seems to fall outside of the normal behavior for the pattern of life as anomalous, requiring further investigation,
where the use of the first self-learning model trained on the normal behavior of at least the first entity to develop the pattern of life for the first entity based on data gathered regarding the first entity over time, combined with the predicting of the expected behavior of the first entity of the computer system based on the first self-learning model trained on the normal behavior, produces the detection of the cyber-threat to the computer system, and
where the first self-learning model trained on the normal behavior of at least the first entity is specifically an unsupervised mathematical model used for detecting behavioral change.
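The claimed pipeline of deriving metrics, learning a per-entity "pattern of life," and flagging departures from it can be sketched in code. This is a minimal illustration only, assuming a rolling Gaussian (z-score) baseline as the unsupervised model; the names (`PatternOfLife`, `derive_metric`, `bytes_out`) and the threshold value are hypothetical and are not taken from the patent:

```python
# Illustrative sketch of the claimed detection steps: derive metrics from
# input data, score them against a self-learned baseline, flag anomalies.
# The Gaussian z-score baseline is an assumed stand-in for the patent's
# unsupervised mathematical model, not its actual implementation.
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class PatternOfLife:
    """Unsupervised per-entity baseline, dynamically updated over time."""
    history: list = field(default_factory=list)

    def update(self, metric: float) -> None:
        # The baseline is a moving benchmark: it grows as data is gathered.
        self.history.append(metric)

    def anomaly_score(self, metric: float) -> float:
        if len(self.history) < 2:
            return 0.0  # not enough observed behavior to judge
        mu, sigma = mean(self.history), pstdev(self.history)
        if sigma == 0:
            return 0.0 if metric == mu else float("inf")
        return abs(metric - mu) / sigma  # deviation from normal behavior

def derive_metric(raw_event: dict) -> float:
    """Derive a metric representative of the received input data."""
    return float(raw_event.get("bytes_out", 0))

def detect(events, model: PatternOfLife, threshold: float = 3.0):
    """Flag events whose metrics fall outside the learned pattern of life."""
    flagged = []
    for ev in events:
        m = derive_metric(ev)
        if model.anomaly_score(m) > threshold:
            flagged.append(ev)  # anomalous: requires further investigation
        model.update(m)  # the pattern of life keeps learning either way
    return flagged
```

Feeding a run of ordinary events followed by a large outlier would, under these assumptions, flag only the outlier, while the continual `update` call keeps the benchmark moving as the entity's behavior drifts.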