US 12,411,945 B1
Large language model to detect and emulate malicious activity
Brendan Cruz Colon, Seattle, WA (US); Joshua Scott Hansen, Sahuarita, AZ (US); Christopher Miller, Seattle, WA (US); Matthew Michael Sommer, Issaquah, WA (US); Alexander Noble Adkins, Catlettsburg, KY (US); and Daniel Azuara, San Diego, CA (US)
Assigned to Amazon Technologies, Inc., Seattle, WA (US)
Filed by Amazon Technologies, Inc., Seattle, WA (US)
Filed on Dec. 16, 2022, as Appl. No. 18/083,357.
Int. Cl. G06F 21/62 (2013.01); G06F 21/55 (2013.01)
CPC G06F 21/554 (2013.01) [G06F 21/6218 (2013.01); G06F 2221/033 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
generating a set of training data, wherein a first training instance of the set of training data comprises a first plurality of messages between a first customer service agent and a first purported customer and a first label indicating whether one or more messages of the first plurality of messages are associated with malicious behavior;
training a large language model (LLM) using the set of training data to generate messages;
generating, by the LLM representing a second purported customer, a message associated with malicious behavior;
receiving a response message from a second customer service agent based on the message associated with malicious behavior; and
in response to the response message being an authorization, generating feedback for the second customer service agent based on a number of responses received before the response message was an action.
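
The claim describes a pipeline in which labeled agent/customer conversations are used to fine-tune an LLM that poses as a malicious customer, and an agent who authorizes the requested action receives feedback based on how many exchanges occurred first. A minimal Python sketch of that flow follows; it is illustrative only. Every class, function, and parameter name here (Message, TrainingInstance, AdversarialCustomerLLM, run_drill, agent_respond, the ten-turn cap) is an assumption rather than anything recited in the patent, and the model is a stub rather than a real trained LLM.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical data structures; names are illustrative, not from the patent text.

@dataclass
class Message:
    sender: str          # "agent" or "customer"
    text: str

@dataclass
class TrainingInstance:
    # A conversation between a customer service agent and a purported customer,
    # labeled to indicate whether any message reflects malicious behavior
    # (e.g., social engineering aimed at an unauthorized account action).
    messages: List[Message]
    malicious: bool

def build_training_set(conversations: List[List[Message]],
                       labels: List[bool]) -> List[TrainingInstance]:
    """Pair each logged conversation with its malicious/benign label."""
    return [TrainingInstance(m, l) for m, l in zip(conversations, labels)]

class AdversarialCustomerLLM:
    """Stand-in for a large language model fine-tuned on labeled
    conversations so it can emulate a malicious 'customer'."""

    def fine_tune(self, training_set: List[TrainingInstance]) -> None:
        # A real implementation would run a fine-tuning job here;
        # this placeholder simply retains the examples.
        self.examples = training_set

    def next_malicious_message(self, transcript: List[Message]) -> Message:
        # Placeholder generation: a real model would condition on the
        # transcript and produce a social-engineering style request.
        return Message("customer",
                       "I lost my phone, can you just push the refund through "
                       "without the verification code?")

def run_drill(model: AdversarialCustomerLLM, agent_respond) -> dict:
    """Drive a simulated conversation against an agent.
    `agent_respond` returns (reply_text, is_authorization) for each turn."""
    transcript: List[Message] = []
    turns_before_action = 0
    for _ in range(10):                      # cap the drill at ten exchanges
        probe = model.next_malicious_message(transcript)
        transcript.append(probe)
        reply_text, is_authorization = agent_respond(transcript)
        transcript.append(Message("agent", reply_text))
        if is_authorization:
            # The agent performed the requested action: produce coaching
            # feedback scaled by how quickly the agent gave in.
            return {
                "authorized": True,
                "turns_before_action": turns_before_action,
                "feedback": f"Authorization granted after "
                            f"{turns_before_action + 1} exchange(s); "
                            "review identity-verification policy.",
            }
        turns_before_action += 1
    return {"authorized": False,
            "turns_before_action": turns_before_action,
            "feedback": "Agent resisted all probes in this drill."}

if __name__ == "__main__":
    model = AdversarialCustomerLLM()
    model.fine_tune(build_training_set([], []))   # empty set, sketch only
    always_refuses = lambda t: ("I can't do that without verification.", False)
    print(run_drill(model, always_refuses)["feedback"])
```

In a real deployment the fine_tune and next_malicious_message stubs would be replaced by an actual fine-tuning run and transcript-conditioned generation, and agent_respond would wrap the live customer-service channel so the drill exercises a second customer service agent as the claim describes.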