How transparency modulates trust in artificial intelligence.

Patterns (N Y)

Leverhulme Centre for the Future of Intelligence and Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK.

Published: April 2022

Article Abstract

The study of human-machine systems is central to a variety of behavioral and engineering disciplines, including management science, human factors, robotics, and human-computer interaction. Recent advances in artificial intelligence (AI) and machine learning have brought the study of human-AI teams into sharper focus. An important set of questions for those designing human-AI interfaces concerns trust, transparency, and error tolerance. Here, we review the emerging literature on this important topic, identify open questions, and discuss some of the pitfalls of human-AI team research. We present opposition (extreme algorithm aversion or distrust) and loafing (extreme automation complacency or bias) as lying at opposite ends of a spectrum, with algorithmic vigilance representing an ideal mid-point. We suggest that, while transparency may be crucial for facilitating appropriate levels of trust in AI and thus for counteracting aversive behaviors and promoting vigilance, transparency should not be conceived solely in terms of the explainability of an algorithm. Dynamic task allocation, as well as the communication of confidence and performance metrics, among other strategies, may ultimately prove more useful to users than explanations from algorithms and significantly more effective in promoting vigilance. We further suggest that, while both aversive and appreciative attitudes are detrimental to optimal human-AI team performance, strategies to curb aversion are likely to be more important in the longer term than those attempting to mitigate appreciation. Our wider aim is to channel disparate efforts in human-AI team research into a common framework and to draw attention to the ecological validity of results in this field.
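The abstract argues that communicating confidence and performance metrics may promote vigilance more effectively than algorithmic explanations. As a purely illustrative sketch (not from the article; the VigilanceAid class, its methods, and all thresholds are hypothetical assumptions), the following Python snippet shows one way an interface might surface a model's stated confidence together with a rolling record of its observed accuracy for each recommendation:

from collections import deque

class VigilanceAid:
    """Hypothetical decision aid: reports model confidence and a rolling
    accuracy record with each recommendation, instead of an explanation,
    so users can calibrate trust rather than blindly accept or reject."""

    def __init__(self, window: int = 50):
        # Track the last `window` outcomes as a running performance metric.
        self.history = deque(maxlen=window)

    def record_outcome(self, was_correct: bool) -> None:
        # Called once ground truth is known for a past recommendation.
        self.history.append(was_correct)

    def recent_accuracy(self) -> float:
        return sum(self.history) / len(self.history) if self.history else 0.0

    def recommend(self, label: str, confidence: float) -> str:
        # Pair the recommendation with confidence and observed performance.
        if self.history:
            track = f"{self.recent_accuracy():.0%} correct over last {len(self.history)} cases"
        else:
            track = "no track record yet"
        return f"Recommendation: {label} (model confidence {confidence:.0%}; {track})"

# Example: after three observed outcomes, a new recommendation is shown as
# "Recommendation: approve (model confidence 83%; 67% correct over last 3 cases)".
aid = VigilanceAid(window=20)
for outcome in (True, False, True):
    aid.record_outcome(outcome)
print(aid.recommend("approve", confidence=0.83))

Showing observed accuracy next to stated confidence lets users notice miscalibration or drift directly, which is one way the "performance metrics" strategy sketched in the abstract could counteract both extreme distrust and complacency.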

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9023880
DOI: http://dx.doi.org/10.1016/j.patter.2022.100455
