US 12,468,897 B2
Self-improving LLMs through consistency-based self-generated demonstrations
Ruoxi Sun, Santa Clara, CA (US); Xingchen Wan, Oxford (GB); Hanjun Dai, San Jose, CA (US); Sercan Omer Arik, San Francisco, CA (US); and Tomas Pfister, Redwood Shores, CA (US)
Assigned to Google LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Mar. 30, 2023, as Appl. No. 18/128,450.
Claims priority of provisional application 63/480,789, filed on Jan. 20, 2023.
Prior Publication US 2024/0249080 A1, Jul. 25, 2024
Int. Cl. G06F 40/40 (2020.01); G06F 16/334 (2025.01)
CPC G06F 40/40 (2020.01) [G06F 16/3344 (2019.01)] 19 Claims
OG exemplary drawing
 
1. A method for consistency-based self-adaptive prompting, comprising:
generating, by one or more processors, a pool of demonstrations using a large language model (LLM) for a plurality of test queries by running chain-of-thought (CoT) over the plurality of test queries;
determining, by the one or more processors, a self-consistency score for respective demonstrations in the pool of demonstrations;
selecting, by the one or more processors, a set of demonstrations from the pool of demonstrations based on the self-consistency scores;
prepending, by the one or more processors, the set of demonstrations to the plurality of test queries; and
generating, by the one or more processors, a plurality of predictions based on the test queries prepended with the set of demonstrations using the LLM.
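The claimed steps can be sketched in Python. This is a minimal illustration only, not the patented implementation: `mock_llm` is a hypothetical stand-in for a real LLM call, and the self-consistency score is taken here as the fraction of sampled chain-of-thought paths that agree with the majority answer.

```python
import random
from collections import Counter

def mock_llm(prompt, seed):
    """Hypothetical stand-in for an LLM call; returns (rationale, answer)."""
    random.seed(hash((prompt, seed)) % (2**32))
    answer = random.choice(["A", "A", "A", "B"])  # biased so a majority usually exists
    return f"Reasoning for {prompt!r} (sample {seed})", answer

def self_consistency(query, n_samples=8):
    """Sample n CoT paths; score = fraction agreeing with the majority answer."""
    samples = [mock_llm(query, s) for s in range(n_samples)]
    counts = Counter(ans for _, ans in samples)
    majority_answer, majority_count = counts.most_common(1)[0]
    score = majority_count / n_samples
    # Keep one rationale that reached the majority answer as the demonstration.
    rationale = next(r for r, a in samples if a == majority_answer)
    demo = f"Q: {query}\n{rationale}\nA: {majority_answer}"
    return demo, score

def build_prompts(test_queries, k=2):
    # Step 1: build a pool of demonstrations by running CoT over the test queries.
    pool = [self_consistency(q) for q in test_queries]
    # Steps 2-3: score each demonstration and select the k most self-consistent.
    selected = [demo for demo, _ in sorted(pool, key=lambda x: -x[1])[:k]]
    # Step 4: prepend the selected demonstrations to every test query.
    prefix = "\n\n".join(selected)
    # Step 5: these prompts would be passed back to the LLM for final predictions.
    return [f"{prefix}\n\nQ: {q}\nA:" for q in test_queries]

prompts = build_prompts(["query 1", "query 2", "query 3"], k=2)
print(prompts[0])
```

Because the demonstrations are generated from the test queries themselves, no labeled examples are needed; the consistency score serves as an unsupervised proxy for demonstration quality.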