An empirical comparison of deep learning explainability approaches for EEG using simulated ground truth.

Sci Rep

Noninvasive Brain-Machine Interface System Laboratory, Department of Electrical and Computer Engineering, University of Houston, Houston, 77204, USA.

Published: October 2023

AI Article Synopsis

  • Recent advances in machine learning, specifically deep learning (DL) neural decoders, have improved decoding from scalp EEG, but these models remain largely uninterpretable.
  • This study compared multiple model explanation methods for EEG data, identifying strengths and weaknesses in their reliability, especially under altered conditions such as randomized model weights or labels.
  • Many visualization methods lacked consistency under these checks, whereas DeepLift was both accurate and robust in capturing key EEG attributes, yielding practical guidance on choosing explanation techniques for DL models.

Article Abstract

Recent advancements in machine learning and deep learning (DL) based neural decoders have significantly improved decoding capabilities using scalp electroencephalography (EEG). However, the interpretability of DL models remains an under-explored area. In this study, we compared multiple model explanation methods to identify the most suitable method for EEG and to understand when some of these approaches might fail. A simulation framework was developed to evaluate the robustness and sensitivity of twelve back-propagation-based visualization methods by comparing their outputs to ground-truth features. Multiple methods tested here showed reliability issues after randomizing either model weights or labels: e.g., the saliency approach, the most widely used visualization technique in EEG, was neither class- nor model-specific. We found that DeepLift was consistently accurate and robust in detecting the three key attributes tested here (temporal, spatial, and spectral precision). Overall, this study provides a review of model explanation methods for DL-based neural decoders and recommendations for understanding when some of these methods fail and what they can capture in EEG.
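To make the workflow the abstract describes more concrete, below is a minimal sketch of how back-propagation-based attributions (saliency and DeepLift) can be computed and subjected to a weight-randomization sanity check in Python with PyTorch and the Captum library. The toy ToyEEGNet architecture, the input size (64 channels x 256 samples), the zero baseline, and the random-weight re-initialization are illustrative assumptions, not the authors' actual models or settings.

```python
# Minimal sketch, assuming PyTorch and Captum are installed.
# ToyEEGNet and all sizes below are hypothetical, not the paper's architecture.
import torch
import torch.nn as nn
from captum.attr import Saliency, DeepLift

class ToyEEGNet(nn.Module):
    """Tiny 1D-CNN decoder: (batch, channels, time) -> class logits."""
    def __init__(self, n_channels=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),
            nn.Linear(16 * 8, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = ToyEEGNet().eval()
x = torch.randn(1, 64, 256, requires_grad=True)  # one simulated EEG trial

# Saliency: absolute gradient of the target logit w.r.t. the input.
sal = Saliency(model).attribute(x, target=1)

# DeepLift: contribution relative to a baseline (here, an all-zero trial).
dl = DeepLift(model).attribute(x, target=1, baselines=torch.zeros_like(x))
print(sal.shape, dl.shape)  # both (1, 64, 256): channel-by-time attributions

# Sanity check in the spirit of the paper: destroy the trained weights and
# recompute attributions; a model-specific method should change markedly.
with torch.no_grad():
    for m in model.modules():
        if isinstance(m, (nn.Conv1d, nn.Linear)):
            nn.init.normal_(m.weight, std=0.1)
            nn.init.zeros_(m.bias)
dl_rand = DeepLift(model).attribute(x, target=1, baselines=torch.zeros_like(x))
stacked = torch.stack([dl.detach().flatten(), dl_rand.detach().flatten()])
corr = torch.corrcoef(stacked)[0, 1]
print(f"DeepLift correlation after weight randomization: {corr:.3f}")
```

In this sanity-check logic, a low correlation after weight randomization is the desired outcome: it indicates the attributions actually depend on the trained model rather than acting as an edge detector on the input, which is the failure mode the study reports for saliency.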


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10584975
DOI: http://dx.doi.org/10.1038/s41598-023-43871-8

Publication Analysis

Top Keywords

deep learning (8)
ground truth (8)
neural decoders (8)
model explanation (8)
explanation methods (8)
eeg (5)
methods (5)
empirical comparison (4)
comparison deep (4)
learning explainability (4)
