US 12,423,441 B2
Method for using generative large language models (LLM) for cybersecurity deception and honeypots
David Arthur McGrew, Poolesville, MD (US); Hugo Mike Latapie, Long Beach, CA (US); and Blake Anderson, Chapel Hill, NC (US)
Assigned to Cisco Technology, Inc., San Jose, CA (US)
Filed by Cisco Technology, Inc., San Jose, CA (US)
Filed on Dec. 21, 2023, as Appl. No. 18/393,487.
Claims priority of provisional application 63/493,552, filed on Mar. 31, 2023.
Prior Publication US 2024/0333765 A1, Oct. 3, 2024
Int. Cl. H04L 9/40 (2022.01); G06F 11/34 (2006.01); G06F 16/334 (2025.01); G06F 16/34 (2019.01); G06F 16/901 (2019.01); G06F 21/31 (2013.01); G06F 21/55 (2013.01); G06F 21/56 (2013.01); G06F 21/57 (2013.01)
CPC H04L 63/1433 (2013.01) [G06F 11/3476 (2013.01); G06F 16/334 (2019.01); G06F 16/345 (2019.01); G06F 16/9024 (2019.01); G06F 21/31 (2013.01); G06F 21/552 (2013.01); G06F 21/563 (2013.01); G06F 21/577 (2013.01); H04L 63/1425 (2013.01); H04L 63/145 (2013.01); H04L 63/1483 (2013.01); H04L 63/1491 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for enhancing cybersecurity using Large Language Model (LLM)-generated honeypot schemes, the method comprising:
generating a plurality of deceptive information using an LLM, configured to attract and engage potential attackers, wherein the plurality of deceptive information comprises one or more characteristics referencing vulnerabilities of a network;
continuously monitoring for a first interaction initiated by an interacting party with one or more components of the generated deceptive information, wherein the first interaction is identified as a potential threat to the network;
in response to detection of an interaction identified as the potential threat, extracting interaction data associated with the interacting party retrieved during the first interaction; and
retraining the LLM with the interaction data to create more effective honeypot schemes.
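The four claimed steps form a feedback loop: generate deceptive artifacts, monitor for interactions with them, extract data from any interaction flagged as a potential threat, and fold that data back into the model's training set. The sketch below illustrates that loop in Python under heavy assumptions: the patent does not disclose an implementation, so `generate_deceptive_artifacts` is a hypothetical stand-in for a real LLM call, and "retraining" is represented only as accumulating fine-tuning examples, not as an actual model update.

```python
import json
from dataclasses import dataclass, field
from typing import Dict, List


def generate_deceptive_artifacts(seed_vulns: List[str]) -> List[Dict]:
    """Hypothetical stand-in for an LLM prompt; a real system would ask a
    generative model to produce plausible fake credentials, hostnames, or
    service banners referencing (nonexistent) network vulnerabilities."""
    return [
        {"type": "fake_credential",
         "lure": f"admin password for host vulnerable to {v}"}
        for v in seed_vulns
    ]


@dataclass
class HoneypotLoop:
    artifacts: List[Dict] = field(default_factory=list)
    interaction_log: List[Dict] = field(default_factory=list)
    training_examples: List[Dict] = field(default_factory=list)

    def deploy(self, seed_vulns: List[str]) -> None:
        # Step 1: generate deceptive information referencing vulnerabilities.
        self.artifacts = generate_deceptive_artifacts(seed_vulns)

    def observe(self, event: Dict) -> bool:
        # Step 2: monitor for a first interaction with a deployed lure and
        # treat any touch of a lure as a potential threat.
        touched = any(a["lure"] in event.get("payload", "")
                      for a in self.artifacts)
        if touched:
            # Step 3: extract interaction data from the interacting party.
            self.interaction_log.append(event)
        return touched

    def retrain(self) -> None:
        # Step 4: turn logged attacker behavior into fine-tuning examples
        # (serialization only; an actual retraining pipeline is out of scope).
        for ev in self.interaction_log:
            self.training_examples.append(
                {"prompt": "craft a more convincing lure",
                 "context": json.dumps(ev)})
        self.interaction_log.clear()
```

As a usage illustration, deploying a single lure, observing one interaction that touches it, and calling `retrain` yields one accumulated training example while benign traffic is ignored.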