CPC G06F 21/563 (2013.01) [G06F 16/3347 (2019.01)]   7 Claims

1. A method for the efficient use of Large Language Models (LLMs) in malicious code detection, the method comprising:
assessing code and assigning it a probability level of being malicious; and
running code assessed to be above a predetermined probability level through an LLM to determine whether the code is malicious,
wherein the assessing step employs a prompt embedding mechanism to determine the probability level of maliciousness, the prompt embedding mechanism including:
generating a vector representation of the code being assessed,
comparing and clustering the vector representation against a database of malicious prompt embeddings, and
assigning a probability value based on similarity to one or more of the malicious prompt embeddings;
wherein the probability value is used in calculating the probability level during the assessing step.
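
The following is a minimal illustrative sketch of the two-stage screening recited in claim 1. It is not the claimed implementation: the embed() and query_llm() functions, the MALICIOUS_EMBEDDINGS database, the PROBABILITY_THRESHOLD value, and the use of the maximum cosine similarity as the probability value are all assumptions chosen for illustration.

import numpy as np

# Hypothetical database of malicious prompt embeddings (one vector per row).
MALICIOUS_EMBEDDINGS = np.random.rand(100, 384)

PROBABILITY_THRESHOLD = 0.8  # the predetermined probability level


def embed(code: str) -> np.ndarray:
    """Hypothetical stand-in for a code-embedding model."""
    rng = np.random.default_rng(abs(hash(code)) % (2**32))
    return rng.random(384)


def assess(code: str) -> float:
    """Assessing step: generate a vector for the code and assign a
    probability value from its similarity to the malicious embeddings."""
    vec = embed(code)
    # Cosine similarity of the code vector against every stored embedding.
    sims = MALICIOUS_EMBEDDINGS @ vec / (
        np.linalg.norm(MALICIOUS_EMBEDDINGS, axis=1) * np.linalg.norm(vec)
    )
    # One possible mapping: take the closest match as the probability value.
    return float(sims.max())


def query_llm(code: str) -> bool:
    """Hypothetical stand-in for prompting an LLM to judge the code."""
    return False


def detect(code: str) -> bool:
    """Run the LLM only on code assessed above the predetermined level."""
    if assess(code) >= PROBABILITY_THRESHOLD:
        return query_llm(code)
    return False

In this sketch the expensive LLM call is gated by the cheap embedding comparison, which is the efficiency point of the claim; code scoring below the threshold never reaches the LLM.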