As the leading cause of dementia worldwide, Alzheimer's Disease (AD) has prompted significant interest in developing Deep Learning (DL) approaches for its classification. However, it remains unclear whether these models rely on established biological indicators. This work compares a novel DL model using structural connectivity (namely, BC-GCN-SE, adapted from functional connectivity tasks) with an established model using structural magnetic resonance imaging (MRI) scans (namely, ResNet18). Unlike most studies, which focus primarily on performance, our work places explainability at the forefront. Specifically, we define a novel Explainable Artificial Intelligence (XAI) metric based on gradient-weighted class activation mapping. Its aim is to quantitatively measure how closely the models' decision-making adheres to established AD biomarkers. The XAI assessment was conducted across 132 brain parcels. Results were compared to AD-relevant regions to measure adherence to domain knowledge. Then, differences in explainability patterns between the two models were assessed to explore the insights offered by each data type (i.e., MRI vs. connectivity). Classification performance was satisfactory in terms of both the median true positive rate (ResNet18: 0.817; BC-GCN-SE: 0.703) and the median true negative rate (ResNet18: 0.816; BC-GCN-SE: 0.738). Statistical tests (p < 0.05) and ranking of the 15% most relevant parcels revealed the involvement of target areas: the medial temporal lobe for ResNet18 and the default mode network for BC-GCN-SE. Additionally, our findings suggest that different imaging modalities provide complementary information to DL models. This lays the foundation for bioengineering advancements in developing more comprehensive and trustworthy DL models, potentially enhancing their applicability as diagnostic support tools for neurodegenerative diseases.
DOI: http://dx.doi.org/10.3390/bioengineering12010082
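As a rough illustration of the parcel-level XAI analysis described in the abstract above, the sketch below averages a Grad-CAM relevance volume within each of 132 atlas parcels and ranks the 15% most relevant ones. It is a minimal sketch under assumed inputs: the Grad-CAM volume, the atlas labelling, and all names (cam, atlas, parcel_relevance) are placeholders, not the authors' implementation.

```python
# Minimal sketch: parcel-level relevance from a Grad-CAM map. The Grad-CAM
# volume, atlas, and label convention (1..132, 0 = background) are assumptions.
import numpy as np

def parcel_relevance(cam: np.ndarray, atlas: np.ndarray, n_parcels: int = 132) -> np.ndarray:
    """Average Grad-CAM activation within each atlas parcel."""
    scores = np.zeros(n_parcels)
    for label in range(1, n_parcels + 1):
        mask = atlas == label
        scores[label - 1] = cam[mask].mean() if mask.any() else 0.0
    return scores

# Placeholder data standing in for a real Grad-CAM map and parcellation.
rng = np.random.default_rng(0)
cam = rng.random((91, 109, 91))
atlas = rng.integers(0, 133, size=cam.shape)

scores = parcel_relevance(cam, atlas)
top_k = max(1, round(0.15 * len(scores)))            # keep the ~15% most relevant parcels
top_parcels = np.argsort(scores)[::-1][:top_k] + 1   # parcel labels, most relevant first
```

In practice, the per-parcel scores would then be compared against a list of AD-relevant regions (e.g., medial temporal structures) to quantify adherence to domain knowledge.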
Sensors (Basel)
January 2025
Industrial Systems Institute (ISI), Athena Research and Innovation Center, 26504 Patras, Greece.
The integration of deep learning (DL) into image processing has driven transformative advancements, enabling capabilities far beyond the reach of traditional methodologies. This survey offers an in-depth exploration of the DL approaches that have redefined image processing, tracing their evolution from early innovations to the latest state-of-the-art developments. It also analyzes the progression of architectural designs and learning paradigms that have significantly enhanced the ability to process and interpret complex visual data.
J Clin Med
January 2025
Department of Neurosurgery, "Carol Davila" University of Medicine and Pharmacy, 020021 Bucharest, Romania.
The convergence of Artificial Intelligence (AI) and neuroscience is redefining our understanding of the brain, unlocking new possibilities in research, diagnosis, and therapy. This review explores how AI's cutting-edge algorithms, ranging from deep learning to neuromorphic computing, are revolutionizing neuroscience by enabling the analysis of complex neural datasets, from neuroimaging and electrophysiology to genomic profiling. These advancements are transforming the early detection of neurological disorders, enhancing brain-computer interfaces, and driving personalized medicine, paving the way for more precise and adaptive treatments.
J Clin Med
January 2025
Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal.
An important impediment to the incorporation of artificial intelligence-based tools into healthcare is their association with so-called black box medicine, a concept arising from their complexity and the difficulty of understanding how they reach a decision. This situation may compromise clinicians' trust in these tools should any errors occur, and the inability to explain how decisions are reached may affect their relationship with patients. Explainable AI (XAI) aims to overcome this limitation by giving users a better understanding of how AI models reach their conclusions, thereby enhancing trust in the decisions reached.
J Clin Med
January 2025
Department of Pulmonary Medicine, Istanbul Oncology Hospital, Istanbul 34846, Türkiye.
We aimed to describe the cardiopulmonary function during exercise and the health-related quality of life (HRQoL) in patients with a history of COVID-19 pneumonia, stratified by chest computed tomography (CT) findings at baseline. Among 77 consecutive patients with COVID-19 who were discharged from the Pulmonology Ward between March 2020 and April 2021, 28 (mean age 54.3 ± 8.
Medicina (Kaunas)
December 2024
Graduate Institute of Business Administration, Fu Jen Catholic University, New Taipei City 242, Taiwan.
The rising prevalence of myopia is a significant global health concern. Atropine eye drops are commonly used to slow myopia progression in children, but their long-term use raises concerns about intraocular pressure (IOP). This study uses SHapley Additive exPlanations (SHAP) to improve the interpretability of a machine learning (ML) model predicting end IOP, offering clinicians explainable insights for personalized patient management.
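As a rough illustration of how SHAP can be used to explain a tabular ML model of this kind, the sketch below fits a gradient-boosted regressor on synthetic data and summarizes per-feature contributions. The feature names, data, and model choice are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: SHAP explanations for a regression model predicting end IOP.
# All data, feature names, and the model are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "baseline_iop": rng.normal(16, 3, 200),               # mmHg, placeholder values
    "atropine_dose": rng.choice([0.01, 0.05, 0.125], 200),
    "age": rng.integers(6, 13, 200),
    "treatment_months": rng.integers(6, 36, 200),
})
y = X["baseline_iop"] + 2.0 * X["atropine_dose"] + rng.normal(0, 1, 200)  # synthetic target

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.Explainer(model, X)      # dispatches to a tree explainer for this model
shap_values = explainer(X)
shap.plots.beeswarm(shap_values)          # per-feature contribution summary across patients
```

A beeswarm or bar summary of the SHAP values is the kind of output that supports the per-patient, explainable insights described above.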