Self-Repetition in Abstractive Neural Summarizers.

Proc Conf Assoc Comput Linguist Meet

Adobe Research, USA.

Published: November 2022

We provide a quantitative and qualitative analysis of self-repetition in the output of neural summarizers. We measure self-repetition as the number of n-grams of length four or longer that appear in multiple outputs of the same system. We analyze the behavior of three popular architectures (BART, T5, and Pegasus), fine-tuned on five datasets. In a regression analysis, we find that the three architectures have different propensities for repeating content across output summaries for different inputs, with BART being particularly prone to self-repetition. Fine-tuning on more abstractive data, and on data featuring formulaic language, is associated with a higher rate of self-repetition. In qualitative analysis, we find that systems produce artefacts such as ads and disclaimers unrelated to the content being summarized, as well as formulaic phrases common in the fine-tuning domain. Our approach to corpus-level analysis of self-repetition may help practitioners clean up training data for summarizers and ultimately support methods for minimizing the amount of self-repetition.
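To make the metric concrete, the sketch below counts n-grams of length four or longer that recur across outputs of the same system, in the spirit of the measure described in the abstract. It is a minimal illustration, not the authors' implementation: the function name self_repetition_score, the whitespace tokenization, and the max_n cap on n-gram length are all assumptions.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def self_repetition_score(summaries, min_n=4, max_n=10):
    # Illustrative sketch of the metric described in the abstract:
    # count n-grams (length >= min_n) that appear in more than one
    # output summary of the same system. The max_n cap and the
    # whitespace tokenization are assumptions, not from the paper.
    doc_freq = Counter()
    for summary in summaries:
        tokens = summary.lower().split()
        seen = set()  # count each n-gram at most once per summary
        for n in range(min_n, max_n + 1):
            seen.update(ngrams(tokens, n))
        doc_freq.update(seen)
    # n-grams shared by at least two distinct outputs
    return sum(1 for count in doc_freq.values() if count >= 2)

# Example: a formulaic disclaimer repeated across unrelated outputs
# drives the score up.
outputs = [
    "the study found improved outcomes for more information visit our website",
    "researchers reported no side effects for more information visit our website",
]
print(self_repetition_score(outputs))  # 6 shared n-grams from the repeated tail
```

Counting each n-gram at most once per summary keeps the score a measure of cross-output repetition rather than within-summary loops, which matches the abstract's focus on content repeated across different inputs.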

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10361333

Publication Analysis

Top Keywords

neural summarizers (8)
qualitative analysis (8)
analysis self-repetition (8)
analysis find (8)
self-repetition (7)
self-repetition abstractive (4)
abstractive neural (4)
summarizers provide (4)
provide quantitative (4)
quantitative qualitative (4)
