US 12,008,333 B2
Computer implemented methods for the automated analysis or use of data, including use of a large language model
William Tunstall-Pedoe, Cambridgeshire (GB); Robert Heywood, Cambridgeshire (GB); Seth Warren, Cambridgeshire (GB); Paul Benn, Cambridgeshire (GB); Duncan Reynolds, Cambridgeshire (GB); Ayush Shah, Cambridgeshire (GB); Luci Krnic, Cambridgeshire (GB); and Ziyi Zhu, Cambridgeshire (GB)
Assigned to UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED, Cambridgeshire (GB)
Filed by UNLIKELY ARTIFICIAL INTELLIGENCE LIMITED, Cambridgeshire (GB)
Filed on Nov. 21, 2023, as Appl. No. 18/515,488.
Application 18/515,488 is a continuation of application No. 18/301,639, filed on Apr. 17, 2023.
Application 18/301,639 is a continuation of application No. PCT/GB2023/050405, filed on Feb. 22, 2023.
Application 18/301,639 is a continuation of application No. 18/001,368, previously published as PCT/GB2021/052196, filed on Aug. 24, 2021.
Claims priority of application No. 2202347 (GB), filed on Feb. 22, 2022; application No. 2219268 (GB), filed on Dec. 20, 2022; application No. 2300624 (GB), filed on Jan. 16, 2023; and application No. 2302085 (GB), filed on Feb. 14, 2023.
Prior Publication US 2024/0095468 A1, Mar. 21, 2024
Int. Cl. G06F 17/00 (2019.01); G06F 40/205 (2020.01); G06F 40/30 (2020.01); G06F 40/56 (2020.01)
CPC G06F 40/56 (2020.01) [G06F 40/205 (2020.01); G06F 40/30 (2020.01)] 23 Claims
OG exemplary drawing
 
1. A computer-implemented method of automatically removing hallucinations from natural language text generated by a large language model (LLM), including the steps of:
(a) providing a prompt or query to the LLM;
(b) automatically generating a baseline response to the prompt or query, the baseline response including factual assertions;
(c) automatically generating one or more verification questions to test one or more of the factual assertions for factual accuracy or inaccuracy;
(d) systematically answering the or each verification question in a manner that is not dependent on the baseline response;
(e) using the answers to the or each verification question to identify one or more factual inaccuracies or hallucinations present in the baseline response;
(f) automatically using the or each answer to the verification question or questions to generate a final natural language output, in which one or more factual inaccuracies or hallucinations present in the baseline response have been removed.
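The pipeline of steps (a)-(f) can be illustrated with a minimal runnable sketch. This is not the patentee's implementation: the `llm` callable is a hypothetical stand-in for any large language model API, and it is stubbed here with canned responses so the control flow (baseline generation, independent verification, and regeneration) can be demonstrated end to end.

```python
def remove_hallucinations(query: str, llm) -> str:
    """Sketch of the claimed method, steps (a)-(f)."""
    # (a)/(b) provide the prompt to the LLM and obtain a baseline
    # response containing factual assertions
    baseline = llm("BASELINE: " + query)
    # (c) generate verification questions for those assertions
    questions = llm("VERIFY: " + baseline).splitlines()
    # (d) answer each verification question independently, without
    # exposing the baseline response to the answering step
    answers = [llm("ANSWER: " + q) for q in questions]
    # (e)/(f) use the verified answers to produce a final output with
    # the baseline's inaccuracies removed
    return llm("FINAL: " + " | ".join(answers))


def stub_llm(prompt: str) -> str:
    """Canned responses standing in for real model calls (illustrative only)."""
    canned = {
        "BASELINE: Who wrote Hamlet and when?":
            "Hamlet was written by Shakespeare in 1750.",  # hallucinated date
        "VERIFY: Hamlet was written by Shakespeare in 1750.":
            "Who wrote Hamlet?\nWhen was Hamlet written?",
        "ANSWER: Who wrote Hamlet?": "Shakespeare",
        "ANSWER: When was Hamlet written?": "around 1600",
        "FINAL: Shakespeare | around 1600":
            "Hamlet was written by Shakespeare around 1600.",
    }
    return canned[prompt]


print(remove_hallucinations("Who wrote Hamlet and when?", stub_llm))
# prints "Hamlet was written by Shakespeare around 1600."
```

Note that in step (d) the answering prompts contain only the verification questions, never the baseline text, which is what the claim means by answering "in a manner that is not dependent on the baseline response."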