Experimentally investigating the relationship between moral judgment and action is difficult when the action of interest entails harming others. We adopt a new approach to this problem by placing subjects in an immersive, virtual reality environment that simulates the classic "trolley problem." In this moral dilemma, the majority of research participants behaved as "moral utilitarians," either (a) acting to cause the death of one individual in order to save the lives of five others, or (b) abstaining from action, when that action would have caused five deaths versus one. Confirming the emotional distinction between moral actions and omissions, autonomic arousal was greater when the utilitarian outcome required action, and increased arousal was associated with a decreased likelihood of utilitarian-biased behavior. This pattern of results held across individuals of different gender, age, and race.


Source: http://dx.doi.org/10.1037/a0025561


Similar Publications

As artificial intelligence (AI) technology is introduced into different areas of society, understanding people's willingness to accept AI decisions emerges as a critical scientific and societal issue. It remains an open question whether people will accept the judgment of humans or of AI in situations where they are unsure of their own judgment, as in the trolley problem. Here, we focus on justified defection (non-cooperation with a bad person) in indirect reciprocity, because it has been shown that people avoid judging justified defection as either good or bad.

View Article and Find Full Text PDF

A Confucian Algorithm for Autonomous Vehicles.

Sci Eng Ethics

October 2024

Chinese Institute of Foreign Philosophy, Peking University, 5 Yiheyuan Road, Beijing, China.

Article Synopsis
  • The article explores how to create a moral algorithm for autonomous vehicles that addresses complex ethical dilemmas similar to the trolley problem, where choices lead to harm.
  • It highlights a novel approach based on Confucian ethics, suggesting that this framework can effectively guide decision-making in these scenarios.
  • The discussion also covers the technical aspects of implementing this Confucian algorithm, including its integration with other moral frameworks and settings for prioritizing the protection of certain individuals.

Sacrificial dilemmas such as the trolley problem play an important role in experimental philosophy (x-phi). But it is increasingly argued that, since we are not likely to encounter runaway trolleys in our daily life, the usefulness of such thought experiments for understanding moral judgments in more ecologically valid contexts may be limited. However, similar sacrificial dilemmas are experienced in real life by animal research decision makers.


Ongoing debates about ethical guidelines for autonomous vehicles mostly focus on variations of the 'Trolley Problem'. Preference surveys built on variations of this ethical dilemma are then used to discuss possible implications for autonomous vehicle policy. In this work, we argue that the lack of realism in such scenarios limits the practical insights they can offer.

