CPC G06F 40/279 (2020.01) | 30 Claims
1. A computer-implemented method of fact-checking the output of a large language model (LLM), including the steps of:
(a) the LLM processing first input data to generate first output based on the first input data;
(b) a processing system using a structured, machine-readable representation of data that conforms to a machine-readable language, in which:
    semantic nodes are represented in the machine-readable language, the semantic nodes including semantic links between semantic nodes, wherein the semantic links are themselves semantic nodes;
    each semantic node denotes one specific meaning;
    a combination of semantic nodes defines a semantic node;
    expressions in the machine-readable language are nestable;
    the first output from the LLM is represented in the machine-readable language;
    reasoning steps are represented in the machine-readable language to represent semantics of the reasoning steps; and
    computation units are represented in the machine-readable language;
(c) providing the first output generated by the LLM to the processing system that uses the structured, machine-readable representation of data that conforms to the machine-readable language; and
(d) the processing system analyzing the first output generated by the LLM to fact-check the first output using the reasoning steps, the computation units and the semantic nodes, and to generate second output which is a fact-checked version of the first output and to provide the second output to a user.
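The claim does not specify any particular implementation, so the following is only an illustrative sketch of steps (b) and (d): every class, function, and relation name here is hypothetical. It models semantic nodes that each denote one meaning, links that are themselves semantic nodes, nestable combinations of nodes, and a simple computation unit used by a fact-checking step over the LLM's first output.

```python
# Hypothetical sketch of the claimed representation; not the patented system.
from dataclasses import dataclass


@dataclass(frozen=True)
class SemanticNode:
    """A node denoting exactly one specific meaning.

    A combination of nodes (the `components` tuple) itself defines a node,
    so expressions in this toy machine-readable language nest arbitrarily.
    """
    meaning: str
    components: tuple = ()


def link(subject: SemanticNode, relation_meaning: str,
         obj: SemanticNode) -> SemanticNode:
    """Build a semantic link; the link is itself a semantic node (step (b))."""
    relation = SemanticNode(relation_meaning)
    return SemanticNode(
        f"({subject.meaning} {relation_meaning} {obj.meaning})",
        components=(subject, relation, obj),
    )


# A tiny knowledge store, and a "computation unit" invoked by reasoning steps.
FACTS: set[SemanticNode] = set()


def computation_unit_equal(a: SemanticNode, b: SemanticNode) -> bool:
    """Hypothetical computation unit: test whether two nodes denote one meaning."""
    return a.meaning == b.meaning


def fact_check(output_node: SemanticNode) -> str:
    """Step (d): analyze the first output against stored facts and return a
    fact-checked version (the second output)."""
    for fact in FACTS:
        if computation_unit_equal(fact, output_node):
            return f"VERIFIED: {output_node.meaning}"
    return f"UNVERIFIED: {output_node.meaning}"


# Populate the store and fact-check a claim parsed from the LLM's first output.
paris = SemanticNode("Paris")
france = SemanticNode("France")
FACTS.add(link(paris, "capital-of", france))

llm_claim = link(paris, "capital-of", france)
print(fact_check(llm_claim))  # → VERIFIED: (Paris capital-of France)
```

In this sketch, translating the LLM's free-text output into `SemanticNode` expressions is assumed to happen upstream; the claim itself leaves both the parsing mechanism and the machine-readable language unspecified.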