Automated simplification models aim to make input texts more readable. Such methods have the potential to make complex information accessible to a wider audience, e.g., providing access to recent medical literature which might otherwise be impenetrable for a lay reader. However, such models risk introducing errors into automatically simplified texts, for instance by inserting statements unsupported by the corresponding original text, or by omitting key information. Providing more readable but inaccurate versions of texts may in many cases be worse than providing no such access at all. The problem of factual accuracy (and the lack thereof) has received heightened attention in the context of summarization models, but the factuality of automatically simplified texts has not been investigated. We introduce a taxonomy of errors that we use to analyze both references drawn from standard simplification datasets and state-of-the-art model outputs. We find that both often contain errors that are not captured by existing evaluation metrics, motivating a need for research into ensuring the factual accuracy of automated simplification models.
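To make the error taxonomy described above concrete, here is a minimal sketch of how annotations for the two error types the abstract names (statements unsupported by the source, and omitted key information) might be represented for manual analysis. The class names, fields, and example below are hypothetical illustrations, not the annotation scheme actually used in the paper.

```python
from dataclasses import dataclass, field
from enum import Enum


class ErrorType(Enum):
    # Illustrative categories inferred from the abstract; the paper's
    # actual taxonomy may differ in names and granularity.
    INSERTION = "insertion"  # statement unsupported by the original text
    OMISSION = "omission"    # key information missing from the simplification


@dataclass
class FactualityError:
    error_type: ErrorType
    span: str       # the offending (or missing) text
    note: str = ""  # free-form annotator comment


@dataclass
class SimplificationAnnotation:
    source: str
    simplification: str
    errors: list = field(default_factory=list)


# Hypothetical example: an annotator flags an unsupported claim
# introduced by a simplification model.
ann = SimplificationAnnotation(
    source="The drug reduced symptoms in 40% of trial participants.",
    simplification="The drug cures most patients.",
)
ann.errors.append(FactualityError(
    ErrorType.INSERTION,
    span="cures most patients",
    note="overstates the source's 40% symptom reduction",
))
print([e.error_type.value for e in ann.errors])  # ['insertion']
```

Per-example annotations of this kind could then be aggregated over reference simplifications and model outputs alike, which is how one would quantify the abstract's claim that existing evaluation metrics fail to capture such errors.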

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9671157
DOI: http://dx.doi.org/10.18653/v1/2022.acl-long.506

Publication Analysis

Top Keywords

providing access (8)
automatically simplified (8)
simplified texts (8)
factual accuracy (8)
evaluating factuality (4)
factuality text (4)
text simplification (4)
simplification automated (4)
models (4)
automated models (4)
