People's explanations for social events powerfully shape their socioemotional responses. We examine why explanations affect emotions, focusing on how external explanations for negative aspects of an outgroup can create compassion for that outgroup. The dominant model of these processes holds that external explanations can reduce perceived control, and that compassion is evoked when negative aspects of an outgroup are perceived as beyond its control. We agree that perceived control is important, but we propose a model in which explanations also affect the perceived suffering of an outgroup, and in which perceived suffering is an additional mechanism connecting external explanations to compassion. Studies are presented that support our integrative dual-mediation model and that pinpoint factors (depth of cognitive processing, expansive sense of identity) that modulate the extent to which the external-explanation/perceived-suffering mechanism evokes compassion.


Source
http://dx.doi.org/10.1177/0146167212460281


