Deep learning methods are increasingly being applied to raw electroencephalogram (EEG) data. However, if these models are to be used in clinical or research contexts, methods to explain them must be developed, and if they are to be used in research contexts in particular, methods for combining explanations across large numbers of models must also be developed to counteract the inherent randomness of existing training approaches. Model visualization-based explainability methods for EEG involve structuring a model architecture so that its extracted features can be characterized, and they have the potential to offer highly useful insights into the patterns those features uncover. Nevertheless, such methods have been underexplored within the context of multichannel EEG, and methods to combine their explanations across folds have not yet been developed. In this study, we present two novel convolutional neural network-based architectures and apply them to automated major depressive disorder (MDD) diagnosis. Our models obtain slightly lower classification performance than a baseline architecture. However, across 50 training folds, they indicate that individuals with MDD exhibit higher β power, potentially higher δ power, and higher brain-wide correlation that is most strongly represented within the right hemisphere. This study provides multiple key insights into MDD and represents a significant step forward for the domain of explainable deep learning applied to raw EEG. We hope that it will inspire future efforts that will eventually enable the development of explainable EEG deep learning models that can contribute both to clinical care and to novel medical research discoveries.
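The abstract's notion of model visualization-based explainability, structuring the architecture so that learned features can be characterized directly, can be illustrated with a minimal sketch. The snippet below is not the published architecture; the channel count, kernel length, sampling rate, and band limits are illustrative assumptions. It shows how the temporal kernels of a first convolutional layer over multichannel EEG can be inspected in the frequency domain to see which bands (e.g., δ, β) a filter emphasizes.

```python
import numpy as np
import torch.nn as nn

# Illustrative assumptions (not taken from the paper):
N_CHANNELS = 19    # hypothetical number of EEG electrodes
KERNEL_LEN = 100   # hypothetical temporal kernel length
FS = 200.0         # hypothetical sampling rate in Hz

# A 1-D convolutional first layer whose weights are directly interpretable
# as temporal filters applied across all EEG channels.
first_layer = nn.Conv1d(in_channels=N_CHANNELS, out_channels=8,
                        kernel_size=KERNEL_LEN, bias=False)

def kernel_spectra(conv: nn.Conv1d, fs: float):
    """Return (frequencies, power) for each learned temporal kernel.

    Averaging each kernel across input channels and taking the FFT magnitude
    characterizes which frequency bands the filter has learned to emphasize.
    """
    w = conv.weight.detach().cpu().numpy()   # shape: (out, in, kernel_len)
    w_mean = w.mean(axis=1)                  # average over EEG channels
    power = np.abs(np.fft.rfft(w_mean, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(w_mean.shape[-1], d=1.0 / fs)
    return freqs, power

freqs, power = kernel_spectra(first_layer, FS)
beta = (freqs >= 13) & (freqs < 30)          # conventional β-band limits
print("Mean β-band power per filter:", power[:, beta].mean(axis=1))
```

In a trained model, repeating this inspection across folds and averaging the resulting spectra is one way the per-fold explanations described in the abstract could be combined, though the paper's exact aggregation procedure is not reproduced here.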
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10983917 | PMC
http://dx.doi.org/10.1101/2024.03.19.585728 | DOI Listing