No silver bullet: interpretable ML models must be explained.

Front Artif Intell

Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Melbourne, VIC, Australia.

Published: April 2023

Recent years have witnessed a number of proposals for the use of so-called interpretable models in specific application domains, including high-risk and safety-critical ones. In contrast, other works have reported pitfalls of machine learning model interpretability, in part attributed to the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations for predictions, the study reveals additional limitations of interpretable models. Concretely, it considers application domains where the purpose is to help human decision makers understand why some prediction was made, or why some other prediction was not made, and where irreducible (and so minimal) explanations are sought. In such domains, the study argues that answers to such why (or why-not) questions can exhibit arbitrary redundancy, i.e., the answers can be further simplified, whenever those answers are obtained by human inspection of the interpretable ML model's representation.
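The redundancy the abstract refers to can be illustrated with a minimal sketch (not taken from the paper): for a decision tree, the explanation a human reads off the tree, the set of literals along the path taken by an instance, may contain tests that are not needed to justify the prediction. The toy tree, the `predict` function, and the deletion-based reduction below are all illustrative assumptions, not the authors' method.

```python
from itertools import product

# Hypothetical toy decision tree over two Boolean features x1, x2.
def predict(x1, x2):
    # left subtree (x1 = 0): class 0
    if x1 == 0:
        return 0
    # right subtree tests x2, but both of its leaves predict class 1,
    # so the path literal on x2 is redundant for justifying the prediction
    return 1

def is_sufficient(fixed, target):
    """Check that every completion of the fixed literals keeps the prediction."""
    free = [f for f in ("x1", "x2") if f not in fixed]
    for vals in product([0, 1], repeat=len(free)):
        point = dict(fixed, **dict(zip(free, vals)))
        if predict(point["x1"], point["x2"]) != target:
            return False
    return True

def minimal_explanation(instance):
    """Deletion-based reduction of the tree-path explanation."""
    target = predict(**instance)
    expl = dict(instance)          # start from all path literals
    for f in list(expl):
        trial = {k: v for k, v in expl.items() if k != f}
        if is_sufficient(trial, target):
            expl = trial           # literal f is redundant; drop it
    return expl

path_expl = {"x1": 1, "x2": 1}     # literals read off the tree path
print(minimal_explanation(path_expl))  # -> {'x1': 1}
```

Here the path explanation {x1 = 1, x2 = 1} can be reduced to {x1 = 1}: the literal on x2 can be dropped without changing the prediction, which is exactly the kind of simplification a human inspecting the tree would not be guaranteed to notice.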


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10165097
DOI: http://dx.doi.org/10.3389/frai.2023.1128212

Similar Publications

Traumatic injury remains a leading cause of death worldwide, with traumatic bleeding being one of its most critical and fatal consequences. The use of whole-body computed tomography (WBCT) in trauma management has rapidly expanded. However, interpreting WBCT images within the limited time available before treatment is particularly challenging for acute care physicians.


In light of the increasing importance of measuring myelin ratios - the ratio of axon-to-fiber (axon + myelin) diameters in myelin internodes - for understanding normal physiology, disease states, repair mechanisms, and myelin plasticity, there is an urgent need to minimize processing and statistical artifacts in current methodologies. Many contemporary studies fall prey to a variety of artifacts, reducing the robustness of study outcomes and slowing the development of novel therapeutics. The underlying causes stem from a lack of understanding of the myelin ratio that has persisted for more than a century.


Background: Accurate assessment of cardiovascular disease (CVD) risk is crucial for effective prevention and resource allocation. However, few CVD risk estimation tools consider social determinants of health (SDoH), despite their known impact on CVD risk. We aimed to estimate 10-year CVD risk in the Eastern Caribbean Health Outcomes Research Network Cohort Study (ECS) across multiple risk estimation instruments and assess the association between SDoH and CVD risk.


Appraisal models, such as Scherer's Component Process Model (CPM), represent an elegant framework for the interpretation of emotion processes, advocating for computational models that capture emotion dynamics. Today's emotion recognition research, however, typically classifies discrete qualities or categorised dimensions, neglecting the dynamic nature of emotional processes and thus limiting interpretability based on appraisal theory. In our research, we estimate emotion intensity from multiple physiological features associated with the CPM's neurophysiological component, using dynamical models with the aim of gaining insight into the relationship between physiological dynamics and perceived emotion intensity.


Vistla: identifying influence paths with information theory.

Bioinformatics

January 2025

Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, Warsaw, 02-106, Poland.

Motivation: It is a challenging task to decipher the mechanisms of a complex system from observational data, especially in biology, where systems are sophisticated, measurements are coarse, and multi-modality is a common trait. Typical approaches that infer a network of relationships between a system's components struggle with the quality and feasibility of estimation, as well as with the interpretability of the results they yield. Said issues can be avoided, however, by tackling a simpler problem: tracking only the influence paths, defined as circuits relaying the information of an experimental perturbation as it spreads through the system.

