Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.

Cognitive Science

Center for Ethics, Department of Philosophy, University of Zurich.

Published: October 2021

The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9285490
DOI: http://dx.doi.org/10.1111/cogs.13032
