US 12,223,273 B2
Learned evaluation model for grading quality of natural language generation outputs
Thibault Sellam, New York City, NY (US); Dipanjan Das, Jersey City, NJ (US); and Ankur Parikh, New York City, NY (US)
Assigned to Google LLC, Mountain View, CA (US)
Filed by GOOGLE LLC, Mountain View, CA (US)
Filed on Sep. 25, 2023, as Appl. No. 18/473,386.
Application 18/473,386 is a continuation of application No. 18/079,148, filed on Dec. 12, 2022, granted, now 11,875,115.
Application 18/079,148 is a continuation of application No. 17/003,572, filed on Aug. 26, 2020, granted, now 11,551,002, issued on Jan. 10, 2023.
Prior Publication US 2024/0012999 A1, Jan. 11, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 40/289 (2020.01); G06F 40/205 (2020.01); G06F 40/47 (2020.01); G06F 40/51 (2020.01)
CPC G06F 40/289 (2020.01) [G06F 40/205 (2020.01); G06F 40/47 (2020.01); G06F 40/51 (2020.01)] 20 Claims
OG exemplary drawing
 
1. A method of training a neural network, comprising:
generating, by one or more processors, a first training signal of a plurality of training signals based on whether a given synthetic sentence pair was generated using backtranslation;
generating, by the one or more processors, one or more second training signals of the plurality of training signals based on a prediction from a textual entailment model regarding a likelihood that a modified passage of text of the given synthetic sentence pair entails or contradicts an original passage of text of the given synthetic sentence pair;
pretraining, by the one or more processors, the neural network to predict the plurality of training signals for the given synthetic sentence pair; and
fine-tuning, by the one or more processors, the neural network to predict a grade allocated to a graded sentence pair.
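 
Claim 1 describes a two-phase procedure: pretraining a neural network to predict synthetic training signals (a backtranslation flag and textual-entailment predictions) for synthetic sentence pairs, then fine-tuning the same network to predict the grade allocated to a graded sentence pair. The Python/PyTorch sketch below is illustrative only and is not the patented implementation; the toy encoder, the per-signal heads, and every name in it (PairEncoder, MultiSignalModel, pretrain_step, finetune_step) are assumptions introduced for exposition.

# Illustrative sketch only; class and function names are assumptions, not the
# patent's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairEncoder(nn.Module):
    """Toy sentence-pair encoder standing in for a pretrained language-model encoder."""

    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # bag-of-tokens pooling over the pair
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor for the concatenated sentence pair
        return torch.tanh(self.proj(self.embed(token_ids)))


class MultiSignalModel(nn.Module):
    """Shared encoder with one head per pretraining signal plus a grade head."""

    def __init__(self, dim=128):
        super().__init__()
        self.encoder = PairEncoder(dim=dim)
        self.backtranslation_head = nn.Linear(dim, 1)  # signal 1: pair generated via backtranslation?
        self.entailment_head = nn.Linear(dim, 3)       # signal 2: entail / contradict / neutral
        self.grade_head = nn.Linear(dim, 1)            # fine-tuning target: allocated grade


def pretrain_step(model, token_ids, backtrans_label, entail_label, optimizer):
    """One pretraining update: predict the synthetic training signals for the pair."""
    h = model.encoder(token_ids)
    loss = F.binary_cross_entropy_with_logits(
        model.backtranslation_head(h).squeeze(-1), backtrans_label
    ) + F.cross_entropy(model.entailment_head(h), entail_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def finetune_step(model, token_ids, grade, optimizer):
    """One fine-tuning update: regress the grade allocated to a graded sentence pair."""
    h = model.encoder(token_ids)
    loss = F.mse_loss(model.grade_head(h).squeeze(-1), grade)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The toy usage below feeds random token ids and labels only to show the two phases running end to end; a real system would encode actual sentence pairs and use human-assigned grades.

model = MultiSignalModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 30522, (4, 32))                        # 4 pairs, 32 tokens each
pretrain_step(model, tokens, torch.ones(4), torch.randint(0, 3, (4,)), opt)
finetune_step(model, tokens, torch.rand(4), opt)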