US 12,277,402 B2
End-to-end neural word alignment process of suggesting formatting in machine translations
Thomas Joachim Zenkel, Bamberg (DE); Joern Wuebker, Berlin (DE); and John Sturdy DeNero, Berkeley, CA (US)
Assigned to Lilt, Inc., Emeryville, CA (US)
Filed by Lilt, Inc., Emeryville, CA (US)
Filed on Aug. 22, 2023, as Appl. No. 18/453,427.
Application 18/453,427 is a continuation of application No. 17/245,888, filed on Apr. 30, 2021, granted, now Pat. No. 11,783,136.
Prior Publication US 2023/0394251 A1, Dec. 7, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 40/58 (2020.01); G06F 40/284 (2020.01); G06N 3/045 (2023.01)
CPC G06F 40/58 (2020.01) [G06F 40/284 (2020.01); G06N 3/045 (2023.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
storing, in computer main memory, an encoding comprising a tokenized vocabulary of a source language and a target language;
training, using the encoding, a forward neural model programmed for translating from the source language to the target language, a forward alignment layer that is programmatically coupled to the forward neural model, a backward neural model programmed for translating from the target language to the source language, and a backward alignment layer that is programmatically coupled to the backward neural model;
storing, in computer main memory, a pairing comprising a source representation of a source sentence associated with the source language and a corresponding target representation of a target sentence associated with the target language;
extracting, based on the source representation and the target representation, forward attention logits from the forward alignment layer and backward attention logits from the backward alignment layer;
programmatically inferring a symmetrized attention matrix that jointly optimizes the likelihood of the pairing under the forward neural model and the backward neural model; and
generating and digitally storing, based on the symmetrized attention matrix, a plurality of first hard alignments between source words comprising source tokens in the source sentence and target words comprising target tokens in the target sentence.
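The symmetrization and hard-alignment steps recited in the claim above can be illustrated concretely. The following is a minimal sketch, not the patented method: it assumes the forward and backward attention logits have already been extracted as NumPy arrays, combines them by simple logit averaging followed by a geometric mean of the two directional softmax normalizations, and thresholds the symmetrized matrix to produce hard token alignments. The function names, the averaging scheme, and the threshold value are illustrative assumptions; the claim itself recites inferring the symmetrized matrix by jointly optimizing the pairing likelihood under both neural models.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def symmetrized_hard_alignments(fwd_logits, bwd_logits, threshold=0.3):
    """Sketch: symmetrize directional attention logits and extract
    hard source-target alignments.

    fwd_logits: (src_len, tgt_len) logits from the forward alignment layer.
    bwd_logits: (tgt_len, src_len) logits from the backward alignment layer.
    """
    # Bring both directions into the same (src_len, tgt_len) orientation
    # and average the raw logits (an assumed, simple symmetrization).
    combined = 0.5 * (fwd_logits + bwd_logits.T)

    # Normalize over target positions (forward view) and over source
    # positions (backward view); the geometric mean keeps only links
    # that are probable under both directions.
    p_fwd = softmax(combined, axis=1)
    p_bwd = softmax(combined, axis=0)
    sym = np.sqrt(p_fwd * p_bwd)

    # Hard alignments: all (source, target) index pairs whose
    # symmetrized score clears the threshold.
    return [(int(i), int(j)) for i, j in zip(*np.where(sym > threshold))]

# Usage with random logits standing in for trained model outputs:
rng = np.random.default_rng(0)
links = symmetrized_hard_alignments(
    rng.normal(size=(4, 5)),   # forward: 4 source tokens x 5 target tokens
    rng.normal(size=(5, 4)),   # backward: 5 target tokens x 4 source tokens
)
print(links)  # pairs of (source index, target index)
```

The geometric mean plays the role of the joint constraint in the claim: a candidate link survives only if it is probable under both the forward and the backward view, which is the intuition behind symmetrizing the two directional models, though the patented process achieves this through joint likelihood optimization rather than the averaging shortcut shown here.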