Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study.

Front Big Data

Department of Information Engineering and Computer Science, University of Trento, Trento, Italy.

Published: January 2022

Negation is widely present in human communication, yet it is largely neglected in research on conversational agents based on neural network architectures. Cognitive studies show that a supportive visual context makes the processing of negation easier. We take GuessWhat?!, a referential visually grounded guessing game, as a test bed and evaluate to what extent guessers based on pre-trained language models profit from negatively answered polar questions. Moreover, to better understand the models' results, we select a controlled sample of games and run a crowdsourcing experiment with human subjects. We evaluate models and humans in the same settings and use the comparison to interpret the models' results. We show that while humans profit from negatively answered questions to solve the task, models struggle to ground negation, and some barely use it; however, when the language signal is poorly informative, visual features help encode the negative information. Finally, the experiments with human subjects allow us to compare human and model predictions and to identify which models make errors that are more human-like and, as such, more plausible.
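In GuessWhat?! each dialogue is a sequence of polar questions, each answered Yes, No, or N/A. As a minimal sketch (not the authors' code) of how negatively answered questions can be identified in the publicly released dataset, the snippet below counts "No" answers per game; the field names ("qas", "answer", "id") follow the public GuessWhat?! JSON-lines release, and the filename is only an example, so adjust both if your copy differs.

import json

def negative_answer_stats(path):
    """Yield (game_id, n_questions, n_negative) for each game in a
    GuessWhat?!-style JSON-lines file (one game per line)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            game = json.loads(line)
            # Assumed schema: game["qas"] is a list of {"question", "answer", ...}
            answers = [qa["answer"].lower() for qa in game["qas"]]
            yield game["id"], len(answers), answers.count("no")

if __name__ == "__main__":
    # Hypothetical filename; the official release ships files such as
    # guesswhat.valid.jsonl (gzipped).
    for game_id, n_q, n_neg in negative_answer_stats("guesswhat.valid.jsonl"):
        if n_neg > 0:
            print(f"game {game_id}: {n_neg}/{n_q} questions answered 'No'")

Games with a high share of "No" answers are the cases where, per the abstract, humans still succeed while models struggle.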


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8819179
DOI: http://dx.doi.org/10.3389/fdata.2021.736709

Publication Analysis

Top Keywords

Keyword                    Occurrences
profit negatively          8
negatively answered        8
models                     5
artificial intelligence    4
intelligence models        4
models ground              4
negation                   4
ground negation            4
humans                     4
negation humans            4
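The page does not state how these keyword counts are produced. One plausible recipe, sketched below purely for illustration, is ranking unigrams and bigrams from the title and abstract by raw frequency after stopword removal (note how "models ground" would then arise from "Models Do Not Ground" once "do" and "not" are dropped); the stopword set and the input filename are assumptions, not part of the source.

from collections import Counter
import re

# Illustrative stopword set; real keyword-extraction pipelines use larger lists.
STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "do",
             "not", "we", "that", "it", "as", "from", "which", "with"}

def top_keywords(text, n=10):
    """Return the n most frequent unigrams and bigrams after stopword removal."""
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    grams = Counter(words)                                   # unigram counts
    grams.update(" ".join(pair) for pair in zip(words, words[1:]))  # bigrams
    return grams.most_common(n)

if __name__ == "__main__":
    # Hypothetical input file holding the title and abstract text.
    text = open("abstract.txt", encoding="utf-8").read()
    for phrase, count in top_keywords(text):
        print(f"{phrase}\t{count}")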

