US 12,347,218 B2
Salience-aware cross-attention for abstractive summarization
Kaiqiang Song, Bellevue, WA (US); Fei Wang, Palo Alto, CA (US); Xiaoyang Wang, Palo Alto, CA (US); Sangwoo Cho, Palo Alto, CA (US); and Dong Yu, Palo Alto, CA (US)
Assigned to TENCENT AMERICA LLC, Palo Alto, CA (US)
Filed by TENCENT AMERICA LLC, Palo Alto, CA (US)
Filed on Dec. 9, 2022, as Appl. No. 18/078,155.
Prior Publication US 2024/0193973 A1, Jun. 13, 2024
Int. Cl. G06V 30/18 (2022.01); G06F 40/284 (2020.01); G06V 30/164 (2022.01); G06V 30/19 (2022.01); G06V 30/262 (2022.01)
CPC G06V 30/18152 (2022.01) [G06F 40/284 (2020.01); G06V 30/164 (2022.01); G06V 30/19093 (2022.01); G06V 30/274 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method executed by at least one processor, the method comprising:
receiving an input comprising natural language texts at an encoder;
adding a token to the input;
obtaining a last-layer hidden state of the encoder as a natural language text representation;
feeding the natural language text representation into a single-layer classification head;
predicting a salience allocation based on the single-layer classification head;
developing a salience-aware cross-attention (SACA) decoder to determine salience in the natural language text representation;
mapping a plurality of salience degrees to a plurality of trainable salience embeddings;
estimating an amount of signal to accept from the plurality of trainable salience embeddings;
incorporating the salience allocation and the signal in a cross-attention layer model; and
generating a summarization based on the SACA decoder and the cross-attention layer model.
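The encoder-side steps of claim 1 (receiving the input, obtaining the last-layer hidden state, feeding it into a single-layer classification head, and predicting a salience allocation) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the patented implementation: the names SalienceHead, num_degrees, and the choice of a generic transformer encoder are all hypothetical.

import torch
import torch.nn as nn

class SalienceHead(nn.Module):
    """Single-layer classification head that predicts a salience
    allocation from the encoder's last-layer hidden state."""

    def __init__(self, hidden_size: int, num_degrees: int):
        super().__init__()
        # One linear layer, matching "a single-layer classification head".
        self.classifier = nn.Linear(hidden_size, num_degrees)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size), the last-layer
        # hidden state used as the natural language text representation.
        logits = self.classifier(hidden_states)  # (batch, seq_len, num_degrees)
        return logits.argmax(dim=-1)             # predicted salience degree per position

# Hypothetical usage: a stand-in encoder over embedded input text
# (including any added tokens) feeds the classification head.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)
head = SalienceHead(hidden_size=256, num_degrees=4)
x = torch.randn(1, 10, 256)   # stand-in for the embedded input
salience = head(encoder(x))   # (1, 10) predicted salience allocation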
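The decoder-side steps (mapping salience degrees to trainable salience embeddings, estimating the amount of signal to accept from them, and incorporating both in a cross-attention layer) can likewise be sketched. The sigmoid gate used below to estimate the accepted signal is one common gating choice and is an assumption here, as are the names SalienceAwareCrossAttention and salience_emb.

import torch
import torch.nn as nn

class SalienceAwareCrossAttention(nn.Module):
    """Sketch of a salience-aware cross-attention (SACA) layer:
    salience degrees are mapped to trainable embeddings, a gate
    estimates how much of that signal to accept, and the result is
    mixed into the memory the decoder attends over. Illustrative only."""

    def __init__(self, hidden_size: int, num_heads: int, num_degrees: int):
        super().__init__()
        # Maps a plurality of salience degrees to trainable embeddings.
        self.salience_emb = nn.Embedding(num_degrees, hidden_size)
        # Gate estimating the amount of signal to accept (an assumption).
        self.gate = nn.Linear(2 * hidden_size, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, decoder_states, encoder_states, salience):
        # decoder_states: (batch, tgt_len, hidden); encoder_states:
        # (batch, src_len, hidden); salience: (batch, src_len) degree ids.
        sal = self.salience_emb(salience)                      # (batch, src_len, hidden)
        g = torch.sigmoid(self.gate(torch.cat([encoder_states, sal], dim=-1)))
        memory = encoder_states + g * sal                      # salience-infused memory
        out, _ = self.attn(decoder_states, memory, memory)     # cross-attention
        return out

# Hypothetical usage with random tensors:
layer = SalienceAwareCrossAttention(hidden_size=256, num_heads=4, num_degrees=4)
dec = torch.randn(1, 5, 256)
enc = torch.randn(1, 10, 256)
deg = torch.randint(0, 4, (1, 10))
out = layer(dec, enc, deg)   # (1, 5, 256)

In a full SACA decoder, layers of this kind would be stacked and the decoder run autoregressively to generate the summarization recited in the final step of claim 1.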