US 11,947,917 B2
Natural language processing with an n-gram machine
Ni Lao, Belmont, CA (US); Jiazhong Nie, Mountain View, CA (US); and Fan Yang, Pittsburgh, PA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Feb. 15, 2022, as Appl. No. 17/672,364.
Application 17/672,364 is a continuation of application No. 16/069,781, granted, now 11,256,866, previously published as PCT/US2017/058229, filed on Oct. 25, 2017.
Prior Publication US 2022/0171942 A1, Jun. 2, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 40/30 (2020.01); G06F 16/9032 (2019.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 5/022 (2023.01)
CPC G06F 40/30 (2020.01) [G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 5/022 (2013.01); G06F 16/90332 (2019.01)] 19 Claims
OG exemplary drawing
 
1. A computing system, comprising:
at least one processor;
a machine-learned natural language processing model comprising:
an encoder model, wherein the encoder model is trained to receive a natural language text body and, in response to receipt of the natural language text body, generate a knowledge graph;
a decoder model, wherein the decoder model generates a reconstruction of the natural language text body based on the knowledge graph, wherein a reconstruction loss is calculated between the natural language text body and the reconstruction of the natural language text body, and wherein the encoder model and the decoder model are trained using an autoencoder objective function which describes the reconstruction loss; and
a programmer model, wherein the programmer model is trained to receive a natural language question, and, in response to receipt of the natural language question, output a program; and
at least one tangible, non-transitory computer-readable medium that stores instructions that, when executed by the at least one processor, cause the at least one processor to perform operations, the operations comprising:
receiving a natural language question;
receiving a natural language text body;
inputting the natural language text body into the encoder model;
generating, using the encoder model, the knowledge graph;
inputting the natural language question into the programmer model;
receiving, as an output of the programmer model, the program; and
executing the program on the knowledge graph to produce an answer to the natural language question.
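The pipeline recited in claim 1 can be sketched as follows. This is an illustrative toy, not the patented implementation: the tuple-based knowledge graph, the `encode`/`decode`/`programmer` function bodies, and the single `hop` program operation are all hypothetical stand-ins for the learned encoder, decoder, and programmer models.

```python
# Sketch of the claimed flow: encoder -> knowledge graph, decoder -> text
# reconstruction (autoencoder objective), programmer -> program, and
# program execution over the knowledge graph to answer the question.

def encode(text_body):
    """Hypothetical encoder: one (subject, relation, object) tuple per sentence."""
    return [tuple(sentence.split()[:3]) for sentence in text_body]

def decode(graph):
    """Hypothetical decoder: reconstruct one sentence per tuple."""
    return [" ".join(t) for t in graph]

def reconstruction_loss(text_body, reconstruction):
    """Sentence-level mismatch rate standing in for the autoencoder objective."""
    mismatches = sum(a != b for a, b in zip(text_body, reconstruction))
    return mismatches / len(text_body)

def programmer(question):
    """Hypothetical programmer: maps a question to a program of (op, args) steps."""
    entity = question.rstrip("?").split()[-1]
    return [("hop", entity, "moved_to")]

def execute(program, graph):
    """Execute the program on the knowledge graph to produce an answer."""
    result = None
    for op, subj, rel in program:
        if op == "hop":
            result = next(o for s, r, o in graph if s == subj and r == rel)
    return result

text = ["mary moved_to kitchen", "john moved_to garden"]
kg = encode(text)                                   # encoder output
loss = reconstruction_loss(text, decode(kg))        # autoencoder training signal
answer = execute(programmer("where is john?"), kg)  # -> "garden"
```

In training per the claim, the reconstruction loss would backpropagate through the decoder and encoder jointly; here it merely measures how faithfully the toy tuples preserve the text.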