Rapid advances in artificial intelligence (AI) and the growing availability of biological, medical, and healthcare data have enabled the development of a wide variety of models. Significant success has been achieved in fields such as genomics, protein folding, disease diagnosis, imaging, and clinical tasks. Although widely used, deep AI models remain inherently opaque, which has drawn criticism from the research community and limited their adoption in clinical practice. Concurrently, a substantial body of research, reviewed here, has focused on making such methods more interpretable, yet critiques of explainable AI (XAI), of its requirements, and of concerns around fairness and robustness have hampered real-world adoption. We discuss how user-driven XAI can be made more useful for different healthcare stakeholders by defining three key personas (data scientists, clinical researchers, and clinicians) and present an overview of how different XAI approaches can address their needs. For illustration, we also walk through several research and clinical examples that take advantage of open-source XAI tools, including those that enhance explanations through visualization. This perspective thus aims to serve as a guide for developing explainability solutions in healthcare, empowering both subject-matter experts, by surveying the available tools, and explainability developers, by showing how such methods can influence the adoption of solutions in practice.
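
As a hedged illustration of the kind of open-source XAI tooling and visualization the perspective refers to (the specific library, model, and dataset below are assumptions, not details taken from the paper), the following sketch computes SHAP feature attributions for a gradient-boosted classifier on a placeholder tabular dataset. A global summary plot of this kind typically serves the data-scientist persona, while per-patient explanations would be closer to what a clinician needs.

```python
# Minimal sketch of an open-source XAI workflow (assumed example, not the
# paper's own code): SHAP attributions for a gradient-boosted classifier.
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Placeholder public dataset standing in for a tabular clinical feature table.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y.astype(int), random_state=0)

model = xgboost.XGBClassifier(n_estimators=100).fit(X_train, y_train)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global feature-importance view (data-scientist persona); per-sample
# force plots would be closer to a clinician-facing explanation.
shap.summary_plot(shap_values, X_test)
```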

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9122967 (PMC)
http://dx.doi.org/10.1016/j.patter.2022.100493 (DOI)

Publication Analysis

Top Keywords (frequency)

human-centered explainability (4); explainability life (4); life sciences (4); healthcare (4); sciences healthcare (4); healthcare medical (4); medical informatics (4); informatics rapid (4); rapid advances (4); advances artificial (4)

Similar Publications

The Pivotal Role of Baseline LDCT for Lung Cancer Screening in the Era of Artificial Intelligence.

Arch Bronconeumol

November 2024

Department of Experimental and Clinical Biomedical Sciences "Mario Serio", University of Florence, 50139 Florence, Italy.

In this narrative review, we address the ongoing challenges of lung cancer (LC) screening using chest low-dose computerized tomography (LDCT) and explore the contributions of artificial intelligence (AI) in overcoming them. We focus on evaluating the initial (baseline) LDCT examination, which provides a wealth of information relevant to the screening participant's health. This includes the detection of large prevalent LCs as well as small malignant nodules that are typically diagnosed as LCs upon growth in subsequent annual LDCT scans.


Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI and, if so, what exactly needs to be evaluated and how.

Article Synopsis
  • The study explores the growing use of deep learning algorithms in radiology for diagnostic support, emphasizing the need for Explainable AI (XAI) to enhance transparency and trust among healthcare professionals.
  • A user study evaluated two visual XAI techniques (Grad-CAM and LIME) in diagnosing pneumonia and COVID-19 from chest images, achieving high accuracy rates of 90% and 98%, respectively.
  • Despite generally positive perceptions of XAI systems, participants showed limited awareness of their practical benefits, with Grad-CAM being favored for coherency and trust, though concerns about its usability in clinical settings were noted.
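
To make the Grad-CAM technique referenced in the synopsis above more concrete, here is a minimal sketch assuming a torchvision ResNet fine-tuned for chest-image classification; the model weights, three-class label set, and image path are placeholders, not the study's actual pipeline.

```python
# Minimal Grad-CAM sketch (hedged example): the fine-tuned weights, the
# class labels, and "chest_xray.png" are assumptions, not the study's pipeline.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None)                # assume fine-tuned weights are loaded here
model.fc = torch.nn.Linear(model.fc.in_features, 3)  # e.g., normal / pneumonia / COVID-19
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block, the usual Grad-CAM target layer.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
img = preprocess(Image.open("chest_xray.png").convert("RGB")).unsqueeze(0)

logits = model(img)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Grad-CAM: weight each activation map by the spatial mean of its gradient,
# sum over channels, keep positive evidence, and rescale to the image size.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heatmap for overlay
```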

Background: AI-powered Digital Therapeutics (DTx) hold potential for enhancing stress prevention by promoting the scalability of P5 Medicine, which may offer users coping skills and improved self-management of mental wellbeing. However, adoption rates remain low, often due to insufficient user and stakeholder involvement during the design phases.

Objective: This study explores the human-centered design potentials of SHIVA, a DTx integrating virtual reality and AI with the SelfHelp+ intervention, aiming to understand stakeholder views and expectations that could influence its adoption.


Towards explainable oral cancer recognition: Screening on imperfect images via Informed Deep Learning and Case-Based Reasoning.

Comput Med Imaging Graph

October 2024

Department Di.Chir.On.S, University of Palermo, Palermo, Italy; Unit of Oral Medicine and Dentistry for fragile patients, Department of Rehabilitation, fragility, and continuity of care, University Hospital Palermo, Palermo, Italy.

Oral squamous cell carcinoma recognition presents a challenge due to late diagnosis and costly data acquisition. A cost-efficient, computerized screening system is crucial for early disease detection, minimizing the need for expert intervention and expensive analysis. Moreover, transparency is essential to align these systems with critical-sector applications.
