Intentional machines: A defence of trust in medical artificial intelligence

Bioethics

Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands.

Published: February 2022

Trust constitutes a fundamental strategy for dealing with risk and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor-patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet this approach has come under criticism from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) that it is also dangerous, that is, that we should not trust AI, particularly if the stakes are as high as they routinely are in medicine. In this paper, we aim to defend a notion of trust in the context of medical AI against both charges. To do so, we highlight the technically mediated intentions manifest in AI systems, which render trust a conceptually plausible stance for dealing with them. Drawing on literature from human-robot interaction, psychology, and sociology, we then propose a novel model for analysing notions of trust, distinguishing between three aspects: reliability, competence, and intentions. We discuss each aspect and make suggestions regarding how medical AI may become worthy of our trust.

Source
http://dx.doi.org/10.1111/bioe.12891

Publication Analysis

Top Keywords (frequency)

trust: 10
medical artificial: 8
artificial intelligence: 8
intentional machines: 4
machines defence: 4
defence trust: 4
medical: 4
trust medical: 4
intelligence trust: 4
trust constitutes: 4
