Moral Judgments of Human vs. AI Agents in Moral Dilemmas.

Behav Sci (Basel)

School of Marxism, Tsinghua University, Beijing 100084, China.

Published: February 2023

Artificial intelligence has quickly integrated into human society, and its moral decision-making has begun to seep into our lives. Research on moral judgments of artificial intelligence behavior is therefore becoming increasingly important. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Across three experiments (N = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people's moral judgments. Specifically, participants rated AI agents' behavior as more immoral and deserving of more blame than humans' behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people's moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and permissible and more morally wrong and blameworthy than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that in different types of moral dilemmas, people adopt different modes of moral judgment toward artificial intelligence, which may be explained by the fact that people engage different processing systems when making moral judgments in different types of moral dilemmas.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9951994
DOI: http://dx.doi.org/10.3390/bs13020181


Similar Publications

Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems.

Sci Eng Ethics

January 2025

Department of Philosophy and Religious Studies, North Carolina State University, Raleigh, NC, USA.

The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders' trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment.


In the field of moral psychology, traditional perspectives often evaluate anger based on its consequences, either validating or condemning it for its perceived benefits or harms. This paper argues for a shift in focus from the outcomes of anger to its moral and psychological foundations. By integrating insights from psychological research, this study posits that the fundamental nature of anger is intrinsically linked to the quest for recognition.


Interpersonal trust is the premise of and foundation for encouraging cooperation in this age of rapid progress. The purpose of this study was to investigate how moral judgment affects bystanders' interpersonal trust, and its internal mechanisms, when ethical transgressions occur. The moral judgments of the evaluators were divided into three categories, opposition, neutrality, and approval, on the basis of the moral transgressions of the offenders.


Recent theoretical work has argued that moral psychology can be understood through the lens of "resource rational contractualism." The view posits that the best way of making a decision that affects other people is to get everyone together to negotiate under idealized conditions. The outcome of that negotiation is an arrangement (or "contract") that would lead to mutual benefit.


AI contextual information shapes moral and aesthetic judgments of AI-generated visual art.

Cognition

January 2025

Social Brain Sciences Group, Department of Humanities, Social and Political Sciences, ETH Zurich, Zurich, Switzerland. Electronic address:

Throughout history, art creation has been regarded as a uniquely human means to express original ideas, emotions, and experiences. However, as Generative Artificial Intelligence reshapes visual, aesthetic, legal, and economic culture, critical questions arise about the moral and aesthetic implications of AI-generated art. Despite the growing use of AI tools in art, the moral impact of AI involvement in the art creation process remains underexplored.

