
Source
http://dx.doi.org/10.1080/15265161.2025.2470668

Publication Analysis

Top Keywords

mapping moralizing: 4
moralizing response: 4
response commentaries: 4
mapping: 1
response: 1
commentaries: 1

Similar Publications

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel.

Cereb Cortex

March 2025

CO3 Lab, Center for Research in Cognition and Neuroscience, Université libre de Bruxelles, Avenue Antoine Depage, 50, 1050, Brussels, Belgium.

The sense of agency, the feeling of being the author of one's actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed.


Background: Research Ethics Committees (RECs) review the ethical, legal, and methodological standards of clinical research. Complying with all requirements and professional expectations while maintaining the necessary scientific and ethical standards can be challenging for applicants and members of the REC alike. There is a need for accessible guidelines and resources to help medical researchers and REC members navigate the legal and ethical requirements and the process of their review.


As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people's cooperative expectations. In the case of human-human dyads, different relationships are governed by different norms: for example, how two strangers, versus two friends or colleagues, should interact when faced with a similar coordination problem often differs. How will the rise of 'social' artificial intelligence (and ultimately, superintelligent AI) complicate people's expectations about the cooperative norms that should govern different types of relationships, whether human-human or human-AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people's cooperative expectations may pull apart between human-human and human-AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types.


Objectives: The Observational Medical Outcomes Partnership common data model (OMOP-CDM) is a useful tool for large-scale network analysis but currently lacks a structured approach to pregnancy episodes. We aimed to develop and implement a perinatal expansion for the OMOP-CDM to facilitate perinatal network research.

Methods: We collaboratively developed a perinatal expansion with input from domain experts and stakeholders to reach consensus.
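The abstract does not specify how the perinatal expansion derives pregnancy episodes, so the following is only a minimal illustrative sketch of the general idea of grouping OMOP-style records into episodes. The person_id and condition_start_date names follow standard OMOP-CDM conventions, but the record data, the 280-day episode window, and the grouping logic are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: merge a person's pregnancy-related condition
# records into candidate pregnancy episodes. The 280-day window and the
# record layout are assumptions, not the published perinatal expansion.
from datetime import date, timedelta
from itertools import groupby

# Hypothetical records: (person_id, condition_start_date)
records = [
    (1, date(2023, 1, 10)),
    (1, date(2023, 6, 15)),
    (1, date(2024, 5, 1)),   # far from the first cluster, so a new episode
    (2, date(2023, 3, 2)),
]

MAX_EPISODE_LENGTH = timedelta(days=280)  # assumed full-term gestation window


def build_episodes(rows):
    """Merge each person's dated records into non-overlapping candidate episodes."""
    episodes = []
    rows = sorted(rows)  # sort by person_id, then date
    for person_id, group in groupby(rows, key=lambda r: r[0]):
        start = end = None
        for _, d in group:
            if start is None or d - start > MAX_EPISODE_LENGTH:
                if start is not None:
                    episodes.append((person_id, start, end))
                start = d
            end = d
        episodes.append((person_id, start, end))
    return episodes


for person_id, start, end in build_episodes(records):
    print(person_id, start, end)
```

In a real OMOP-CDM network study, such episodes would instead be materialized as rows in a dedicated episode table so that downstream analyses can join them against the standard clinical tables.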

