US 12,153,884 B1
Advanced transformer architecture with epistemic embedding for enhanced natural language processing
Correy Allen Kowall, New Orleans, LA (US); Robert Donald Veglahn, Boston, MA (US); Nivedita Sivakumar, Richardson, TX (US); Jober't Aladwan, New Orleans, LA (US); and Mitchell Klein, New Orleans, LA (US)
Assigned to NOLA AI, Inc., New Orleans, LA (US)
Filed by NOLA AI, Inc., New Orleans, LA (US)
Filed on Aug. 9, 2024, as Appl. No. 18/799,635.
Claims priority of provisional application 63/518,556, filed on Aug. 9, 2023.
Int. Cl. G06F 40/30 (2020.01); G06F 40/284 (2020.01); G06N 20/00 (2019.01)
CPC G06F 40/284 (2020.01) [G06F 40/30 (2020.01); G06N 20/00 (2019.01)] 5 Claims
OG exemplary drawing
 
1. A system for performing a Natural Language Processing (NLP) task, the system comprising:
a computer, comprising a processor, a memory, and a plurality of programming instructions, the plurality of programming instructions, when executed by the processor, cause the processor to:
send an input corpus and a prompt with an NLP task to a Large Language Model (LLM), wherein the LLM comprises:
an input layer configured to create detailed addressing for words and sentences within the input corpus;
an embedding layer configured to:
generate an epistemic embedding for the input corpus using a vignette tableau, wherein vignettes in the vignette tableau determine and manage the epistemic embedding, and wherein the epistemic embedding is indicative of user sentiment and epistemic evidence values;
combine the epistemic embedding, word embedding, metadata embedding, and speaker tag embedding to generate tokens with multiple vectors;
identify caret positions in the input corpus for tokens with multiple vectors;
an output layer configured to:
receive tokens processed from a Multi-Headed Attention (MHA) System; and
receive tokens directly from the embedding layer; and
generate an output by reconstructing the input using tokens from the MHA and the embedding layer, wherein the output is presented on a graphical user interface;
wherein the caret positions are indicators or markers in the input layer that signify external attention.
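
The embedding and output layers recited in claim 1 can be illustrated with a minimal sketch. The sketch below assumes a PyTorch-style implementation; the class names (EpistemicEmbeddingLayer, ReconstructionOutputLayer), the two-value sentiment/evidence representation of the epistemic signal, and the use of a caret mask to restrict attention keys are illustrative assumptions, not the patented implementation.

import torch.nn as nn

class EpistemicEmbeddingLayer(nn.Module):
    """Illustrative embedding layer: combines word, epistemic, metadata,
    and speaker-tag embeddings into a single token representation."""
    def __init__(self, vocab_size, n_speakers, n_meta, d_model):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.speaker_emb = nn.Embedding(n_speakers, d_model)
        self.meta_emb = nn.Embedding(n_meta, d_model)
        # Assumption: the vignette tableau supplies two per-token scalars
        # (user sentiment, epistemic evidence) that are projected to d_model.
        self.epistemic_proj = nn.Linear(2, d_model)

    def forward(self, word_ids, speaker_ids, meta_ids, epistemic_vals):
        # epistemic_vals: (batch, seq_len, 2) = [sentiment, evidence]
        return (self.word_emb(word_ids)
                + self.speaker_emb(speaker_ids)
                + self.meta_emb(meta_ids)
                + self.epistemic_proj(epistemic_vals))

class ReconstructionOutputLayer(nn.Module):
    """Illustrative output layer: mixes attention-processed tokens with
    tokens taken directly from the embedding layer, then reconstructs
    the input over the vocabulary."""
    def __init__(self, d_model, vocab_size, n_heads=8):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, embedded, caret_mask=None):
        # caret_mask (hypothetical): boolean (batch, seq_len), True at caret
        # positions flagged for external attention.  One possible reading is
        # to restrict attention keys to those positions when a mask is given.
        key_mask = ~caret_mask if caret_mask is not None else None
        attn_out, _ = self.mha(embedded, embedded, embedded,
                               key_padding_mask=key_mask)
        # Direct path from the embedding layer is added back before the
        # vocabulary projection used to reconstruct the input.
        return self.to_vocab(attn_out + embedded)

In this sketch, the direct path from the embedding layer to the output layer is realized as a residual addition before the vocabulary projection, which is one conventional way for the output layer to receive tokens both from the MHA System and directly from the embedding layer.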