People's causal judgments are susceptible to the action effect, whereby they judge actions to be more causal than inactions. We offer a new explanation for this effect, the counterfactual explanation: people judge actions to be more causal than inactions because they are more inclined to consider counterfactual alternatives to actions than to inactions. Experiment 1a conceptually replicates the original action effect for causal judgments. Experiment 1b confirms a novel prediction of the new explanation, the reverse action effect, in which people judge inactions to be more causal than actions in overdetermination cases. Experiment 2 directly compares the two effects in joint-causation and overdetermination scenarios and conceptually replicates them with new scenarios. Taken together, these studies support the new counterfactual explanation for the action effect in causal judgment.
DOI: http://dx.doi.org/10.1016/j.cognition.2019.05.006
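The logic of the counterfactual explanation can be illustrated with a toy simulation (a hypothetical sketch, not the authors' model): assume counterfactual alternatives to actions are sampled more often than alternatives to inactions, and that a sampled counterfactual raises a cause's rating when it reveals counterfactual dependence and lowers it when it does not.

```python
# Toy model of the counterfactual explanation (hypothetical, for illustration only).
def causal_rating(p_consider_cf, outcome_depends_on_cause, base=0.5, weight=0.5):
    """A considered counterfactual raises the rating if the outcome would have
    changed without the candidate cause, and lowers it if it would not."""
    delta = 1.0 if outcome_depends_on_cause else -1.0
    return base + weight * p_consider_cf * delta

P_CF_ACTION, P_CF_INACTION = 0.8, 0.3  # assumed counterfactual-sampling propensities

for scenario, depends in [("joint causation", True), ("overdetermination", False)]:
    print(scenario,
          "-> action:", round(causal_rating(P_CF_ACTION, depends), 2),
          "inaction:", round(causal_rating(P_CF_INACTION, depends), 2))
```

Under these assumptions the action is rated higher than the inaction in the joint-causation case and lower in the overdetermination case, mirroring the action effect and the reverse action effect.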
Neuroimage
January 2025
College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China; Shenzhen Research Institute, Nanjing University of Aeronautics and Astronautics, Shenzhen, 518038, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, 210016, China.
Structural-functional connectivity coupling (SC-FC coupling) describes the relationship between white matter structural connections and the corresponding functional activation or functional connections. It has been widely used to identify brain disorders. However, existing research on SC-FC coupling focuses on the global and regional scales, and few studies have investigated the impact of brain disorders on this relationship from the perspective of multi-brain-region cooperation.
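For context, a minimal sketch of one common way to quantify regional SC-FC coupling, assuming synthetic connectivity matrices and a Spearman correlation per region (this is background, not the study's multi-region method):

```python
# Regional SC-FC coupling as the Spearman correlation between each region's
# structural and functional connectivity profiles (synthetic stand-in data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_regions = 90                                              # assumed atlas size
sc = np.abs(rng.standard_normal((n_regions, n_regions)))    # stand-in SC matrix
fc = np.tanh(rng.standard_normal((n_regions, n_regions)))   # stand-in FC matrix
sc, fc = (sc + sc.T) / 2, (fc + fc.T) / 2                   # symmetrize both

def regional_coupling(sc, fc):
    """One coupling value per region, excluding self-connections."""
    off_diag = ~np.eye(sc.shape[0], dtype=bool)
    return np.array([
        spearmanr(sc[i][off_diag[i]], fc[i][off_diag[i]]).correlation
        for i in range(sc.shape[0])
    ])

print(regional_coupling(sc, fc)[:5].round(3))
```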
Neural Netw
December 2024
College of Science, Shantou University, Shantou 515063, China.
The explainability of Graph Neural Networks (GNNs) is critical to many GNN applications, yet it remains a significant challenge. A convincing explanation should be both necessary and sufficient. However, existing GNN explanation approaches focus on only one of the two aspects, necessity or sufficiency, or on a heuristic trade-off between them.
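The two criteria can be made concrete with a toy check (the stand-in classifier and masks below are hypothetical, not the paper's method): an explanation subgraph is necessary if removing its edges changes the prediction, and sufficient if keeping only its edges preserves it.

```python
# Necessity / sufficiency of a GNN explanation, illustrated with a stand-in
# graph classifier so the example is self-contained.
import numpy as np

def toy_classifier(adj):
    """Stand-in for a trained GNN: predicts 1 if the graph has >= 3 edges."""
    return int(adj.sum() / 2 >= 3)

def necessary(adj, expl_mask, model):
    """Removing the explanation's edges should change the prediction."""
    return model(adj * (1 - expl_mask)) != model(adj)

def sufficient(adj, expl_mask, model):
    """The explanation's edges alone should preserve the prediction."""
    return model(adj * expl_mask) == model(adj)

# A 4-node graph and a candidate explanation subgraph (symmetric 0/1 masks).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]])
expl = np.array([[0, 1, 1, 0],
                 [1, 0, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]])

print("necessary: ", necessary(adj, expl, toy_classifier))
print("sufficient:", sufficient(adj, expl, toy_classifier))
```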
J Mol Graph Model
December 2024
Department of Chemistry, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 11623, Saudi Arabia.
The present work combines extreme gradient boosting with Shapley values, a productive pairing within the field of explainable artificial intelligence. We also use a genetic algorithm to analyse the HDAC1 inhibitory activity of a pool of 1274 molecules experimentally reported for HDAC1 inhibition, with the aim of modelling and predicting the inhibitory activity of these molecules.
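A minimal sketch of the gradient-boosting-plus-Shapley part of such a workflow, assuming random stand-in descriptors in place of the 1274-molecule dataset and omitting the genetic-algorithm feature selection:

```python
# XGBoost regression + SHAP feature attribution on stand-in QSAR-style data.
import numpy as np
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 10))                    # stand-in molecular descriptors
y = 2 * X[:, 0] - X[:, 3] + rng.normal(0, 0.1, 200)   # stand-in activity values

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)                 # Shapley values for tree models
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per descriptor gives a global importance ranking.
print(np.abs(shap_values).mean(axis=0).round(3))
```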
2024 Int Conf Act Behav Comput (2024)
May 2024
Stevens Institute of Technology, Hoboken, New Jersey.
As an increasing number of states adopt more permissive cannabis regulations, the need for a comprehensive understanding of cannabis's effects on young adults has grown, driven by its increasing prevalence of use. By leveraging popular eXplainable Artificial Intelligence (XAI) techniques such as SHAP (SHapley Additive exPlanations), rule-based explanations, intrinsically interpretable models, and counterfactual explanations, we undertake an exploratory but in-depth examination of the impact of cannabis use on individual behavioral patterns and physiological states. This study explores the possibility of facilitating algorithmic decision-making by combining XAI techniques with sensor data, with the aim of providing researchers and clinicians with personalized analyses of cannabis intoxication behavior.
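As one illustration of the counterfactual-explanation component, here is a minimal sketch on hypothetical sensor features (the feature names, data, and greedy search are assumptions, not the study's pipeline):

```python
# Counterfactual explanation for a toy sensor-based intoxication classifier:
# find a nearby feature vector for which the predicted class flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Stand-in features: [heart_rate, gait_variability, reaction_time]
X = rng.normal([75, 0.2, 0.35], [10, 0.05, 0.05], size=(300, 3))
y = (X[:, 0] + 200 * X[:, 1] + 100 * X[:, 2] > 150).astype(int)  # toy label rule

clf = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(x, clf, step=0.01, max_steps=5000):
    """Nudge x along the classifier's weight vector until the prediction flips."""
    x_cf = x.copy()
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
    if target == 0:
        direction = -direction
    for _ in range(max_steps):
        if clf.predict(x_cf.reshape(1, -1))[0] == target:
            break
        x_cf += step * direction
    return x_cf

x = X[0]
x_cf = counterfactual(x, clf)
print("original      :", x.round(3), "->", clf.predict(x.reshape(1, -1))[0])
print("counterfactual:", x_cf.round(3), "->", clf.predict(x_cf.reshape(1, -1))[0])
```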
bioRxiv
November 2024
Else Kroener Fresenius Center for Digital Health (EKFZ), Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany.
Deep learning can extract predictive and prognostic biomarkers from histopathology whole slide images, but its interpretability remains elusive. We develop and validate MoPaDi (Morphing histoPathology Diffusion), which generates counterfactual mechanistic explanations. MoPaDi uses diffusion autoencoders to manipulate pathology image patches and flip their biomarker status by changing the morphology.
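A heavily simplified sketch of the underlying idea, i.e. generating a counterfactual by pushing a latent code across a biomarker classifier's decision boundary and decoding it (the linear encoder, decoder, and classifier below are toys; MoPaDi itself relies on diffusion autoencoders):

```python
# Toy latent-space counterfactual: encode a patch, cross the classifier
# boundary in latent space with the smallest shift, decode the result.
import numpy as np

rng = np.random.default_rng(0)
d_pixels, d_latent = 64, 8
enc = rng.standard_normal((d_latent, d_pixels)) / np.sqrt(d_pixels)  # toy encoder
dec = np.linalg.pinv(enc)                                            # toy decoder
w, b = rng.standard_normal(d_latent), 0.0                            # toy latent classifier

def classify(z):
    return int(z @ w + b > 0)

def counterfactual_patch(x, margin=0.5):
    """Shift the latent code just past the decision boundary, then decode."""
    z = enc @ x
    score = z @ w + b
    z_cf = z - (score + np.sign(score) * margin) * w / (w @ w)
    return dec @ z_cf, classify(z), classify(z_cf)

x = rng.standard_normal(d_pixels)           # stand-in image patch
x_cf, before, after = counterfactual_patch(x)
print("biomarker status:", before, "->", after)
print("pixel-space change (L2 norm):", round(float(np.linalg.norm(x_cf - x)), 3))
```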