Explainability in medicine in an era of AI-based clinical decision support systems.

Front Genet

Department of Intensive Care Medicine and Centre for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium.

Published: September 2022

The combination of "Big Data" and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal levels. One of the features that has given rise to particular concern is the issue of explainability, since, if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This "opacity" problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary in order to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself. We argue that, in day-to-day clinical practice, accuracy is sufficient as an "epistemic warrant" for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as being to deliver the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today's patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9527344
DOI: http://dx.doi.org/10.3389/fgene.2022.903600

Publication Analysis

Top Keywords

explainability (10)
clinical (8)
ai-based clinical (8)
clinical decision (8)
decision support (8)
support systems (8)
issue explainability (8)
function explainability (8)
clinical practice (8)
accuracy sufficient (8)

Similar Publications

Alzheimer's disease (AD) and other neurodegenerative illnesses place a heavy strain on the world's healthcare systems, particularly among the aging population. With a focus on research from January 2022 to September 2023, this scoping review, which adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) criteria, examines the changing landscape of artificial intelligence (AI) applications for early AD detection and diagnosis. Forty-four carefully chosen articles were selected from a pool of 2,966 articles for the qualitative synthesis.

A hybrid machine learning approach for the personalized prognostication of aggressive skin cancers.

NPJ Digit Med

January 2025

Mike Toth Head and Neck Cancer Research Center, Division of Surgical Oncology, Department of Otolaryngology-Head and Neck Surgery, Mass Eye and Ear, Boston, MA, USA.

Accurate prognostication guides optimal clinical management in skin cancer. Merkel cell carcinoma (MCC) is the most aggressive form of skin cancer that often presents in advanced stages and is associated with poor survival rates. There are no personalized prognostic tools in use in MCC.

Predicting power transformer health index and life expectation based on digital twins and multitask LSTM-GRU model.

Sci Rep

January 2025

Department of Embedded Network Systems and Technology, Faculty of Artificial Intelligence, Kafrelsheikh University, El-Geish St, Kafrelsheikh, 33516, Egypt.

Power transformers play a crucial role in enabling the integration of renewable energy sources and improving the overall efficiency and reliability of smart grid systems. They facilitate the conversion, transmission, and distribution of power from various sources and help to balance the load between different parts of the grid. The Transformer Health Index (THI) is one of the most important indicators for ensuring their reliability and preventing unplanned outages.
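The abstract does not spell out the model architecture, so the following is only a minimal PyTorch sketch of what a multitask LSTM-GRU regressor of this kind could look like: a shared recurrent encoder over time-series sensor readings feeding two heads, one for the health index and one for life expectation. The layer sizes, stacking order, input features, and loss weighting are all assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class MultiTaskLSTMGRU(nn.Module):
    """Hypothetical multitask model: shared LSTM + GRU encoder with two regression heads."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head_thi = nn.Linear(hidden, 1)   # Transformer Health Index
        self.head_life = nn.Linear(hidden, 1)  # life expectation

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        out, _ = self.gru(out)
        last = out[:, -1, :]                   # last time step summarizes the sequence
        return self.head_thi(last), self.head_life(last)

# Illustrative joint training step: sum of the two task losses.
model = MultiTaskLSTMGRU(n_features=8)
x = torch.randn(16, 24, 8)                     # e.g. 24 time steps of 8 sensor features
thi_pred, life_pred = model(x)
loss = nn.functional.mse_loss(thi_pred, torch.rand(16, 1)) \
     + nn.functional.mse_loss(life_pred, torch.rand(16, 1))
loss.backward()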

Purpose: To implement and evaluate deep learning-based methods for the classification of pediatric brain tumors (PBT) in magnetic resonance (MR) data.

Methods: A subset of the "Children's Brain Tumor Network" dataset was retrospectively used (N = 178 subjects, female = 72, male = 102, NA = 4, age range [0.01, 36.

This study illustrates the use of chemical fingerprints with machine learning for blood-brain barrier (BBB) permeability prediction. Employing the Blood Brain Barrier Database (B3DB) dataset for BBB permeability prediction, we extracted nine different fingerprints. Support Vector Machine (SVM) and Extreme Gradient Boosting (XGBoost) algorithms were used to develop models for permeability prediction.
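As a rough illustration of this kind of pipeline, the sketch below featurizes SMILES strings with one fingerprint type (Morgan bits via RDKit; the paper extracts nine kinds) and trains SVM and XGBoost classifiers. The file name, column names ("SMILES", "BBB+/BBB-"), and all hyperparameters are assumptions for illustration, not details from the paper.

import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

def morgan_fingerprint(smiles, radius=2, n_bits=2048):
    """One of several possible fingerprint featurizations of a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(fp), dtype=np.int8)

df = pd.read_csv("B3DB_classification.csv")            # hypothetical local copy of B3DB
fps = df["SMILES"].map(morgan_fingerprint)
mask = fps.notna()
X = np.stack(fps[mask].values)
y = (df.loc[mask, "BBB+/BBB-"] == "BBB+").astype(int)   # 1 = BBB-permeable

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=300, eval_metric="logloss").fit(X_tr, y_tr)

print("SVM ROC-AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
print("XGBoost ROC-AUC:", roc_auc_score(y_te, xgb.predict_proba(X_te)[:, 1]))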
