With the ongoing rise of machine learning, the need for methods that explain decisions made by artificial intelligence systems is becoming increasingly important. For image classification tasks in particular, many state-of-the-art explanation tools rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image so that the classifier would have made a different prediction. In doing so, they provide users with a fundamentally different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. Especially in medical contexts, where relevant information often consists of textural and structural features, high-quality counterfactual images have the potential to give meaningful insights into decision processes. In this work, we present GANterfactual, an approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in an exemplary medical use case. Our results show that, in the chosen medical use case, counterfactual explanations lead to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
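At its core, the described approach relies on an adversarial image-to-image translation model that maps an image from its predicted class domain into the opposite class domain. The following is a minimal, illustrative PyTorch sketch of that generation step only; the network sizes, the toy classifier, and the input shape are placeholders and not the published GANterfactual architecture.

```python
# Illustrative sketch (not the authors' exact architecture): once a
# CycleGAN-style generator has been trained to translate images from the
# predicted-class domain to the opposite-class domain, producing a
# counterfactual is a single forward pass.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a trained image-to-image translation generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def counterfactual(generator, classifier, image):
    """Translate `image` to the opposite domain and report both predictions."""
    with torch.no_grad():
        cf = generator(image)
        return cf, classifier(image).argmax(1), classifier(cf).argmax(1)

# Toy usage with random weights; a real system would load trained checkpoints.
gen = TinyGenerator()
clf = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
x = torch.rand(1, 1, 64, 64)          # e.g. a grayscale medical image patch
cf_image, pred_orig, pred_cf = counterfactual(gen, clf, x)
print(pred_orig.item(), pred_cf.item())
```

With trained weights, the counterfactual image would be inspected side by side with the original, so that the visual differences indicate what would have to change for the classifier to decide differently.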

Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9024220
DOI: http://dx.doi.org/10.3389/frai.2022.825565

Publication Analysis

Top Keywords (frequency): counterfactual explanation (8); explanation systems (8); counterfactual explanations (8); counterfactual (7); ganterfactual-counterfactual explanations (4); medical (4); explanations medical (4); medical non-experts (4); non-experts generative (4); generative adversarial (4)

Similar Publications

Semantic prioritization in visual counterfactual explanations with weighted segmentation and auto-adaptive region selection.

Neural Netw

December 2024

Department of Artificial Intelligence, Korea University, 02841, Seoul, Republic of Korea.

Article Synopsis
  • Traditional techniques for visual counterfactual explanations often replace parts of a target image with sections from unrelated images, which can reduce the clarity of the model's intention.
  • The study introduces WSAE-Net, a method that creates a weighted semantic map to improve computational efficiency and uses an auto-adaptive editing sequence to ensure that replacements are semantically relevant (a generic sketch of this kind of region replacement follows the synopsis).
  • Experimental results show that WSAE-Net outperforms previous methods, leading to better interpretability and understanding of counterfactual explanations in visual contexts.
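As a rough illustration of the region-replacement idea referenced above (a generic sketch, not WSAE-Net itself), the following swaps the highest-weighted segments of a target image with the corresponding segments of a counter-class image; the segmentation map, the per-segment weights, and both images are synthetic placeholders.

```python
# Generic segmentation-weighted region replacement: segments with the highest
# weight are swapped in from a counter-class "donor" image first.
import numpy as np

def replace_top_segments(target, donor, segments, weights, k=1):
    """Copy the k highest-weighted segments of `donor` into `target`."""
    edited = target.copy()
    top = sorted(weights, key=weights.get, reverse=True)[:k]
    for seg_id in top:
        mask = segments == seg_id
        edited[mask] = donor[mask]
    return edited

rng = np.random.default_rng(0)
target = rng.random((64, 64))                 # image classified as class A
donor = rng.random((64, 64))                  # image classified as class B
segments = rng.integers(0, 4, size=(64, 64))  # toy segmentation map (4 regions)
weights = {0: 0.1, 1: 0.7, 2: 0.05, 3: 0.15}  # e.g. saliency mass per segment
candidate = replace_top_segments(target, donor, segments, weights, k=1)
```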

Local structural-functional coupling with counterfactual explanations for epilepsy prediction.

Neuroimage

January 2025

College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China; Shenzhen Research Institute, Nanjing University of Aeronautics and Astronautics, Shenzhen, 518038, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, 210016, China.

Structural-functional connectivity coupling (SC-FC coupling) describes the relationship between white matter structural connections and the corresponding functional activation or functional connections. It has been widely used to identify brain disorders. However, existing research on SC-FC coupling focuses on global and regional scales, and few studies have investigated the impact of brain disorders on this relationship from the perspective of multi-brain-region cooperation.
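A common operationalization of regional SC-FC coupling, which may differ from the model used in this paper, correlates each region's structural connectivity profile with its functional connectivity profile. The sketch below uses random symmetric matrices as stand-ins for an N x N structural and functional connectome.

```python
# Regional SC-FC coupling as the rank correlation between each region's
# structural and functional connectivity profiles (synthetic connectomes).
import numpy as np
from scipy.stats import spearmanr

def regional_sc_fc_coupling(sc, fc):
    """One Spearman correlation per region, excluding the self-connection."""
    n = sc.shape[0]
    coupling = np.zeros(n)
    for i in range(n):
        mask = np.arange(n) != i
        coupling[i] = spearmanr(sc[i, mask], fc[i, mask])[0]
    return coupling

rng = np.random.default_rng(0)
n_regions = 90
sc = rng.random((n_regions, n_regions)); sc = (sc + sc.T) / 2
fc = rng.random((n_regions, n_regions)); fc = (fc + fc.T) / 2
print(regional_sc_fc_coupling(sc, fc)[:5])
```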


The explainability of Graph Neural Networks (GNNs) is critical to various GNN applications, yet it remains a significant challenge. A convincing explanation should be both necessary and sufficient at the same time. However, existing GNN explanation approaches focus on only one of the two aspects, necessity or sufficiency, or on a heuristic trade-off between the two.
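One simple, model-agnostic way to make the two criteria concrete (not the method of any specific GNN explainer): score necessity as the prediction drop when the explained part is removed, and sufficiency as the fraction of the prediction retained when only the explained part is kept. The toy "model", inputs, and mask below are placeholders.

```python
# Toy necessity/sufficiency scores for a candidate explanation mask.
import numpy as np

def necessity(predict, x, explanation_mask):
    """Prediction drop after zeroing out the explained features."""
    return predict(x) - predict(x * (1 - explanation_mask))

def sufficiency(predict, x, explanation_mask):
    """Fraction of the prediction retained with only the explained features."""
    return predict(x * explanation_mask) / max(predict(x), 1e-8)

predict = lambda x: float(x.sum()) / x.size   # stand-in for p(class | graph)
x = np.array([0.9, 0.8, 0.1, 0.05])           # e.g. edge importance inputs
mask = np.array([1, 1, 0, 0])                 # candidate explanation subgraph
print(necessity(predict, x, mask), sufficiency(predict, x, mask))
```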


The present work combines extreme gradient boosting with Shapley values, a thriving combination within the field of explainable artificial intelligence. We also use a genetic algorithm to analyse a broad pool of 1,274 molecules experimentally reported for HDAC1 inhibition, in order to ascertain their HDAC1 inhibitory activity.
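The gradient-boosting-plus-Shapley-values pattern can be sketched with the standard xgboost and shap libraries; the data, targets, and hyperparameters below are synthetic, and the genetic-algorithm feature-selection step is omitted.

```python
# Fit a gradient-boosted regressor and attribute predictions with SHAP.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.random((200, 10))                             # e.g. molecular descriptors
y = X[:, 0] * 2 + X[:, 3] + rng.normal(0, 0.1, 200)   # e.g. activity values

model = xgb.XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # per-feature attributions
print(shap_values.shape)                                # (200, 10)
```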


As an increasing number of states adopt more permissive cannabis regulations, the need for a comprehensive understanding of cannabis's effects on young adults has grown, driven by its escalating prevalence of use. By leveraging popular eXplainable Artificial Intelligence (XAI) techniques such as SHAP (SHapley Additive exPlanations), rule-based explanations, intrinsically interpretable models, and counterfactual explanations, we undertake an exploratory but in-depth examination of the impact of cannabis use on individual behavioral patterns and physiological states. This study explores the possibility of facilitating algorithmic decision-making by combining XAI techniques with sensor data, with the aim of providing researchers and clinicians with personalized analyses of cannabis intoxication behavior.
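As a hedged illustration of counterfactual explanations on tabular, sensor-like features (not the study's actual pipeline), the following sketch greedily perturbs the most influential feature of a fitted classifier until the predicted class flips; the data and the "intoxicated" label are synthetic.

```python
# Greedy counterfactual search on tabular features with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((300, 4))                      # e.g. heart rate, gait, etc.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # toy "intoxicated" label
clf = LogisticRegression().fit(X, y)

def greedy_counterfactual(clf, x, step=0.05, max_iter=200):
    """Nudge the most influential feature until the predicted class flips."""
    x = x.copy()
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    for _ in range(max_iter):
        if clf.predict(x.reshape(1, -1))[0] == target:
            return x
        j = np.argmax(np.abs(clf.coef_[0]))   # most influential feature
        x[j] += step * np.sign(clf.coef_[0][j]) * (1 if target == 1 else -1)
    return None

print(greedy_counterfactual(clf, X[0]))
```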

