Asking annotators to explain "why" they labeled an instance yields annotator rationales: natural language explanations that provide reasons for classifications. In this work, we survey the collection and use of annotator rationales. Human-annotated rationales can improve data quality and form a valuable resource for improving machine learning models. Moreover, human-annotated rationales can inspire the construction and evaluation of model-annotated rationales, which can play an important role in explainable artificial intelligence.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11157010
DOI: http://dx.doi.org/10.3389/frai.2024.1260952