Despite the myriad peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, especially in diagnosis and precision disease staging. One major cause of this low impact is the lack of model transparency, which significantly limits AI adoption in real clinical practice. To address this problem, AI models need to be explained to their users. We therefore conducted a comprehensive study of Explainable Artificial Intelligence (XAI) following the PRISMA methodology. Our findings suggest that XAI can improve model performance, instill trust in users, and assist them in decision-making. In this systematic review, we introduce common XAI techniques and illustrate their utility with specific application examples. We discuss the evaluation of XAI results, an important step in maximizing the value of AI-based clinical decision support systems. Additionally, we present traditional, modern, and advanced XAI models to trace the evolution of these techniques. Finally, we provide a best-practice guideline that developers can refer to during model experimentation, along with potential solutions, illustrated with specific examples, to common challenges in AI model experimentation. We hope this comprehensive review will promote AI adoption in biomedicine and healthcare.
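As a concrete illustration of one widely used family of XAI techniques surveyed in reviews like this one — additive feature attribution, the idea underlying SHAP — the sketch below computes exact attributions for a linear model, where the attribution of feature i reduces to w_i·(x_i − E[x_i]) and the attributions sum to the prediction's shift from the cohort baseline. The model, weights, and feature values are illustrative assumptions, not from the reviewed studies.

```python
# Minimal sketch of additive feature attribution for a linear model.
# For f(x) = b + sum_i w_i * x_i, the exact SHAP value of feature i is
# w_i * (x_i - mean_i), so attributions sum to f(x) - E[f(x)].

def linear_attributions(weights, baseline_means, x):
    """Return per-feature attributions for one input x."""
    return [w * (xi - mu) for w, mu, xi in zip(weights, baseline_means, x)]

# Hypothetical two-feature risk model (coefficients and cohort means assumed).
weights = [0.8, -0.5]    # learned coefficients
baseline = [2.0, 1.0]    # feature means over a reference cohort
x = [3.0, 0.0]           # one patient's feature vector

phi = linear_attributions(weights, baseline, x)
prediction_shift = sum(phi)   # equals f(x) - f(baseline)
print(phi)                    # [0.8, 0.5]
```

Nonlinear models need an approximation (e.g., sampling feature coalitions), but the interpretation is the same: each feature receives a signed share of the prediction's deviation from the baseline.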


Source
http://dx.doi.org/10.1109/RBME.2022.3185953

Publication Analysis

Top Keywords

artificial intelligence (12)
explainable artificial (8)
systematic review (8)
specific examples (8)
model experimentation (8)
xai (5)
intelligence methods (4)
methods combating (4)
combating pandemics (4)
pandemics systematic (4)

Similar Publications

Purpose: This study explores how corporate social responsibility (CSR) and artificial intelligence (AI) can be combined in the healthcare industry during the post-COVID-19 recovery phase. The aim is to showcase how this fusion can help tackle healthcare inequalities, enhance accessibility and support long-term sustainability.

Design/methodology/approach: Adopting a viewpoint approach, the study leverages existing literature and case studies to analyze the intersection of CSR and AI.


Background: Machine learning (ML) is increasingly used to predict clinical deterioration in intensive care unit (ICU) patients through scoring systems. Although promising, such algorithms often overfit their training cohort and perform worse at new hospitals. Thus, external validation is a critical - but frequently overlooked - step to establish the reliability of predicted risk scores to translate them into clinical practice.
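The external-validation step described above can be sketched as follows. The risk score, cohorts, and AUROC computation here are toy stand-ins (a hand-written linear score on hypothetical vitals), not the cited study's model or data; the point is only the workflow of scoring an external cohort with a frozen model and comparing discrimination.

```python
# Illustrative sketch of external validation: a frozen risk score is
# evaluated on the training-site ("internal") cohort and a new-hospital
# ("external") cohort. All data and the score itself are hypothetical.

def risk_score(hr, lactate):
    # Assumed deterioration score: higher heart rate / lactate -> higher risk.
    return 0.03 * hr + 0.5 * lactate

def auroc(scores, labels):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Each entry: ((heart rate, lactate), deteriorated?)
internal = [((80, 1.0), 0), ((90, 1.2), 0), ((120, 3.5), 1), ((110, 4.0), 1)]
external = [((85, 1.1), 0), ((105, 2.0), 0), ((90, 1.5), 1), ((140, 2.5), 1)]

for name, cohort in [("internal", internal), ("external", external)]:
    scores = [risk_score(*x) for x, _ in cohort]
    labels = [y for _, y in cohort]
    print(name, auroc(scores, labels))   # internal 1.0, external 0.75
```

The drop from perfect internal discrimination to 0.75 externally mirrors the overfitting problem the abstract describes: performance on the training cohort alone is not evidence of transportability.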


Introduction: Overcrowding in emergency departments (EDs) is a major public health issue, increasing workload and exhaustion for care teams and resulting in poor outcomes. Being able to predict patient admissions to the ED would therefore be valuable.

Aim: The main objective of this study was to build and test a prediction tool for ED admissions using artificial intelligence.


Background: Advances in artificial intelligence and machine learning have facilitated the creation of mortality prediction models which are increasingly used to assess quality of care and inform clinical practice. One open question is whether a hospital should utilize a mortality model trained from a diverse nationwide dataset or use a model developed primarily from their local hospital data.

Objective: To compare performance of a single-hospital, 30-day all-cause mortality model against an established national benchmark on the task of mortality prediction.


This joint practice guideline/procedure standard was collaboratively developed by the European Association of Nuclear Medicine (EANM), the Society of Nuclear Medicine and Molecular Imaging (SNMMI), the European Association of Neuro-Oncology (EANO), and the PET task force of the Response Assessment in Neuro-Oncology Working Group (PET/RANO). Brain metastases are the most common malignant central nervous system (CNS) tumors. PET imaging with radiolabeled amino acids and, to a lesser extent, [18F]FDG has gained considerable importance in the assessment of brain metastases, especially for the differential diagnosis between recurrent metastases and treatment-related changes, which remains a limitation of conventional MRI.

