US 12,073,180 B2
Computer implemented methods for the automated analysis or use of data, including use of a large language model
William Tunstall-Pedoe, Cambridgeshire (GB); Robert Heywood, Cambridgeshire (GB); Seth Warren, Cambridgeshire (GB); Paul Benn, Cambridgeshire (GB); Duncan Reynolds, Cambridgeshire (GB); Ayush Shah, Cambridgeshire (GB); Luci Krnic, Cambridgeshire (GB); and Ziyi Zhu, Cambridgeshire (GB)
Assigned to UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED, Cambridgeshire (GB)
Filed by UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED, Cambridgeshire (GB)
Filed on Apr. 17, 2023, as Appl. No. 18/301,594.
Application 18/301,594 is a continuation of application No. PCT/GB2023/050405, filed on Feb. 23, 2023.
Application 18/301,594 is a continuation of application No. 18/001,368, previously published as PCT/GB2021/052196, filed on Aug. 24, 2021.
Claims priority of application No. 2202347 (GB), filed on Feb. 22, 2022; application No. 2219268 (GB), filed on Dec. 20, 2022; application No. 2300624 (GB), filed on Jan. 16, 2023; and application No. 2302085 (GB), filed on Feb. 14, 2023.
Prior Publication US 2023/0259705 A1, Aug. 17, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 40/279 (2020.01)
CPC G06F 40/279 (2020.01) 30 Claims
OG exemplary drawing
 
1. A computer-implemented method of fact-checking the output of a large language model (LLM), including the steps of:
(a) the LLM processing first input data to the LLM to generate first output from the LLM based on the first input data to the LLM;
(b) a processing system using a structured, machine-readable representation of data that conforms to a machine-readable language, in which semantic nodes are represented in the machine-readable language, the semantic nodes including semantic links between semantic nodes wherein the semantic links are themselves semantic nodes, in which each semantic node denotes one specific meaning, in which a combination of semantic nodes defines a semantic node, in which expressions in the machine-readable language are nestable, in which the first output from the LLM is represented in the machine-readable language, in which reasoning steps are represented in the machine-readable language to represent semantics of the reasoning steps, and in which computation units are represented in the machine-readable language;
(c) providing the first output generated by the LLM to the processing system that uses the structured, machine-readable representation of data that conforms to the machine-readable language; and
(d) the processing system analyzing the first output generated by the LLM to fact-check the first output using the reasoning steps, the computation units and the semantic nodes, and to generate second output which is a fact-checked version of the first output and to provide the second output to a user.
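The architecture recited in claim 1 (an LLM's output re-expressed in a machine-readable language of nestable semantic nodes, then fact-checked by reasoning over those nodes) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `SemanticNode` class, the `fact` constructor, the toy knowledge base, and the exact-match "reasoning step" in `fact_check` are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass

# Assumption: every element of the representation, including the links
# between nodes, is itself a SemanticNode (as claim 1, step (b), recites).
@dataclass(frozen=True)
class SemanticNode:
    meaning: str          # each node denotes one specific meaning
    parts: tuple = ()     # a combination of nodes defines a node; expressions nest

    def __repr__(self):
        if not self.parts:
            return self.meaning
        return f"({self.meaning} {' '.join(map(repr, self.parts))})"

# A semantic link, represented as a node in its own right.
IS_CAPITAL_OF = SemanticNode("is-capital-of")

def fact(link, subject, obj):
    # A fact is a nested expression combining nodes, and is itself a node.
    return SemanticNode("fact", (link, subject, obj))

# Toy knowledge base of trusted facts (stand-in for the processing system's data).
KB = {fact(IS_CAPITAL_OF, SemanticNode("Paris"), SemanticNode("France"))}

def fact_check(claims):
    """Step (d), drastically simplified: verify each claim node with a single
    'reasoning step' (exact lookup in the KB). A real system would chain
    reasoning steps and computation units rather than require exact matches."""
    return [(claim, claim in KB) for claim in claims]

# Step (c): the LLM's first output, already parsed into the machine-readable form.
llm_claims = [
    fact(IS_CAPITAL_OF, SemanticNode("Paris"), SemanticNode("France")),
    fact(IS_CAPITAL_OF, SemanticNode("Lyon"), SemanticNode("France")),
]

for claim, supported in fact_check(llm_claims):
    print(claim, "->", "supported" if supported else "unsupported")
```

In this sketch the second output of step (d) would be the annotated list returned by `fact_check`; the patent's actual reasoning steps and computation units are far richer than the exact-match lookup shown here.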