US 12,216,996 B2
Reasonable language model learning for text generation from a knowledge graph
Thanh Lam Hoang, Maynooth (IE); Dzung Tien Phan, Pleasantville, NY (US); Gabriele Picco, Dublin (IE); Lam Minh Nguyen, Ossining, NY (US); and Vanessa Lopez Garcia, Dublin (IE)
Assigned to International Business Machines Corporation, Armonk, NY (US)
Filed by INTERNATIONAL BUSINESS MACHINES CORPORATION, Armonk, NY (US)
Filed on Nov. 2, 2021, as Appl. No. 17/453,327.
Prior Publication US 2023/0134798 A1, May 4, 2023
Int. Cl. G06F 40/279 (2020.01); G06F 40/30 (2020.01); G06F 40/40 (2020.01); G06N 5/022 (2023.01)
CPC G06F 40/279 (2020.01) [G06F 40/30 (2020.01); G06F 40/40 (2020.01); G06N 5/022 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for providing reasonable language model learning for text data in a computing system by a processor, comprising:
analyzing content from a plurality of data sources and a plurality of triples from a knowledge graph;
generating training data having a plurality of candidate labels derived from the analyzed content, each candidate label associated with a corresponding triple of the plurality of triples from the knowledge graph, the association of a first candidate label with a first triple based on semantic similarity between content from a first data source and a keyword from the first triple;
training, by a convex continuous relaxation model, one or more reasonable language models based on the training data; and
generating text data by the trained reasonable language models using the plurality of triples.
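The candidate-label generation step recited in claim 1 can be illustrated with a minimal sketch. This is not the patented implementation: the claim does not specify a similarity measure, so token-overlap (Jaccard) similarity stands in for "semantic similarity," and the function names (`generate_training_data`, `jaccard`) and the 0.2 threshold are hypothetical choices for illustration only.

```python
import re

def jaccard(a, b):
    """Token-overlap similarity; a hypothetical stand-in for the
    semantic similarity measure recited in the claim."""
    sa = set(re.findall(r"\w+", a.lower()))
    sb = set(re.findall(r"\w+", b.lower()))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def generate_training_data(sources, triples, threshold=0.2):
    """Associate each candidate label (here, a sentence of analyzed
    content) with the triple whose keywords it most resembles,
    yielding (triple, label) training pairs."""
    pairs = []
    for sentence in sources:
        best, best_score = None, threshold
        for triple in triples:
            # Keywords drawn from the triple's subject/predicate/object.
            keywords = " ".join(triple)
            score = jaccard(sentence, keywords)
            if score > best_score:
                best, best_score = triple, score
        if best is not None:
            pairs.append((best, sentence))
    return pairs

triples = [("Dublin", "capital_of", "Ireland"),
           ("Paris", "capital_of", "France")]
sources = ["Dublin is the capital of Ireland.",
           "Paris is the capital city of France."]
data = generate_training_data(sources, triples)
```

Each resulting (triple, label) pair would then serve as supervision for the language model trained in the subsequent step; the actual method further involves a convex continuous relaxation model, which this sketch does not attempt to reproduce.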