Experimentally investigating the relationship between moral judgment and action is difficult when the action of interest entails harming others. We adopt a new approach to this problem by placing subjects in an immersive, virtual reality environment that simulates the classic "trolley problem." In this moral dilemma, the majority of research participants behaved as "moral utilitarians," either (a) acting to cause the death of one individual in order to save the lives of five others, or (b) abstaining from action, when that action would have caused five deaths versus one. Confirming the emotional distinction between moral actions and omissions, autonomic arousal was greater when the utilitarian outcome required action, and increased arousal was associated with a decreased likelihood of utilitarian-biased behavior. This pattern of results held across individuals of different gender, age, and race.
DOI: http://dx.doi.org/10.1037/a0025561
Sci Rep
January 2025
College of Policy Studies, Tsuda University, Tokyo, 151-0051, Japan.
As artificial intelligence (AI) technology is introduced into different areas of society, understanding people's willingness to accept AI decisions emerges as a critical scientific and societal issue. It is an open question whether people can accept the judgement of humans or AI in situations where they are unsure of their judgement, as in the trolley problem. Here, we focus on justified defection (non-cooperation with a bad person) in indirect reciprocity because it has been shown that people avoid judging justified defection as good or bad.
CJEM
January 2025
Department of Emergency Medicine, University of British Columbia, Vancouver, BC, Canada.
Sci Eng Ethics
October 2024
Chinese Institute of Foreign Philosophy, Peking University, 5 Yiheyuan Road, Beijing, China.
Med Health Care Philos
December 2024
Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.
Sacrificial dilemmas such as the trolley problem play an important role in experimental philosophy (x-phi). But it is increasingly argued that, since we are not likely to encounter runaway trolleys in our daily life, the usefulness of such thought experiments for understanding moral judgments in more ecologically valid contexts may be limited. However, similar sacrificial dilemmas are experienced in real life by animal research decision makers.
AI Ethics
April 2023
Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland.
Ongoing debates about ethical guidelines for autonomous vehicles mostly focus on variations of the 'Trolley Problem'. Using variations of this ethical dilemma in preference surveys, possible implications for autonomous vehicles policy are discussed. In this work, we argue that the lack of realism in such scenarios leads to limited practical insights.