People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.

Cognition

School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK.

Published: December 2024

As machines powered by artificial intelligence increase in their technological capacities, there is a growing interest in the theoretical and practical idea of artificial moral advisors (AMAs): systems powered by artificial intelligence that are explicitly designed to assist humans in making ethical decisions. Across four pre-registered studies (total N = 2604) we investigated how people perceive and trust artificial moral advisors compared to human advisors. Extending previous work on algorithmic aversion, we show that people have a significant aversion to AMAs (vs humans) giving moral advice, and that this is particularly the case when advisors - human and AI alike - give advice based on utilitarian principles. We find that participants expect AI to make utilitarian decisions, and that even when participants agreed with a decision made by an AMA, they still expected to disagree with an AMA more than with a human in future. Our findings suggest challenges in the adoption of artificial moral advisors, particularly those that draw on and endorse utilitarian principles - however normatively justifiable.


Source: http://dx.doi.org/10.1016/j.cognition.2024.106028


Similar Publications

Background and Objectives: Primary objectives: to compare the rates of sustained clinical remission at 12 months in patients treated with anti-tumour necrosis factor (anti-TNF) agents and immunomodulators who withdraw anti-TNF treatment versus those who maintain it.

Secondary Objectives: to evaluate the effect of anti-TNF withdrawal on relapse-free time, endoscopic and radiological activity, safety, quality of life and work productivity; and to identify predictive factors for relapse.

Design: Prospective, quadruple-blind, multicentre, randomised, controlled trial.


The National Institute for Health and Care Excellence (NICE) was established a quarter of a century ago in 1999 to regulate the cost-effectiveness of pharmaceuticals (and other health technologies) for the NHS. Drawing on medical sociology theories of corporate bias, neoliberalism, pluralism/polycentricity and regulatory capture, the purpose of this article is to examine the applicability of those theories to NICE as a key regulatory agency in the UK health system. Based on approximately 7 years of documentary research, interviews with expert informants and observations of NICE-related meetings, this paper focuses particularly on NICE's relationship with the interests of the pharmaceutical industry compared with other stakeholder interests at the meso-organisational level.



2024 critical review of the patient blood management (PBM) recommendations of the Spanish enhanced recovery after major surgery (via RICA).

Cir Esp (Engl Ed)

November 2024

Servicio de Medicina Interna, Complex Hospitalari Moisès Broggi, Consorci Sanitari Integral, Sant Joan Despí, Barcelona, Spain; Grupo Multidisciplinar para el Estudio y Manejo de la Anemia del Paciente Quirúrgico (Anemia Working Group España), Madrid, Spain; Grupo Español de Rehabilitación Multimodal (GERM), Madrid, Spain; Banco de Sangre y Tejidos de Navarra, Servicio Navarro de Salud, Osasunbidea, Pamplona, Spain.

The Spanish enhanced recovery in adult surgery strategy, the "RICA pathway", was published in 2021 and includes 19 specific recommendations and more than 20 indirect recommendations for patient blood management (PBM). After reviewing these recommendations, and in the context of the new clinical evidence available, we propose the following updates: First: Detection and treatment of any preoperative anemia status in ALL patients who are candidates for major surgery with hematinic deficiencies. Second: Universal use of tranexamic acid in major surgery, bedside monitoring of intraoperative hemoglobin levels, restrictive transfusion criteria, and monitoring of patient well-being in terms of hydration, coagulability, normothermia and analgesia.


Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility-specifically, the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e.

