DOI: http://dx.doi.org/10.1136/bmj.f2728
J Exp Psychol Gen
October 2024
Department of Psychology, Harvard University.
For contractualist accounts of morality, actions are moral if they correspond to what rational or reasonable agents would agree to do, were they to negotiate explicitly. This, in turn, often depends on each party's bargaining power, which varies with the stakes each party has in the potential agreement and with the alternatives available to each in case of disagreement. If there is an asymmetry, with one party enjoying higher bargaining power than the other, that party can usually get a better deal, as often happens in real negotiations.
J Res Adolesc
December 2024
North Carolina State University, Raleigh, North Carolina, USA.
Cognition
January 2025
Baylor University, Hankamer School of Business, One Bear Place #98001, Waco, TX 76798, United States of America. Electronic address:
The growing prevalence of artificial intelligence (AI) in our lives has brought the impact of AI-based decisions on human judgments to the forefront of academic scholarship and public debate. Despite growth in research on people's receptivity towards AI, little is known about how interacting with AI shapes subsequent interactions among people. We explore this question in the context of unfair decisions determined by AI versus humans and focus on the spillover effects of experiencing such decisions on the propensity to act prosocially.
Dev Psychol
September 2024
Department of Psychology, Columbia University.
Punishment is a key mechanism for regulating selfish behavior and maintaining cooperation in a society. However, children often show mixed evaluations of third-party punishment. The current work asked how punishment severity might shape children's social judgments.
Sci Rep
August 2024
Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Camperdown, NSW, 2006, Australia.
Fairness in machine learning (ML) has emerged as a critical concern as AI systems increasingly influence diverse aspects of society, from healthcare decisions to legal judgments. Many studies show evidence of unfair ML outcomes. However, the current body of literature lacks a statistically validated approach for evaluating the fairness of a deployed ML algorithm against a dataset.
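To make the kind of evaluation this abstract describes concrete, the sketch below checks one common group-fairness criterion (demographic parity) on a dataset of binary model decisions, using a standard two-proportion z-test. This is a generic textbook illustration under assumed inputs, not the statistically validated approach proposed in the article; the function name and example data are hypothetical.

```python
# Illustrative sketch: test whether a model's positive-decision rate differs
# between two demographic groups, using a two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def demographic_parity_test(decisions_a, decisions_b):
    """Compare positive-outcome rates between two groups.

    decisions_a, decisions_b: lists of 0/1 model decisions, one list per group.
    Returns (rate difference p_a - p_b, two-sided p-value).
    """
    n_a, n_b = len(decisions_a), len(decisions_b)
    p_a, p_b = sum(decisions_a) / n_a, sum(decisions_b) / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    pooled = (sum(decisions_a) + sum(decisions_b)) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# Hypothetical example: group A was approved 70 of 100 times, group B 50 of 100.
diff, p = demographic_parity_test([1] * 70 + [0] * 30, [1] * 50 + [0] * 50)
```

A single significance test like this only covers one fairness definition and one pair of groups; part of the article's motivation is precisely that deployed systems need a more rigorous, validated evaluation than ad hoc checks of this kind.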