Neural network models have become a popular machine-learning technique for predicting the toxicity of chemicals. However, due to their complex structure, it is difficult to understand the predictions made by these models, which limits confidence in them. Current techniques to tackle this problem, such as SHAP or integrated gradients, provide insights by attributing importance to the input features of individual compounds. While these methods have produced promising results in some cases, they do not shed light on how representations of compounds are transformed in the hidden layers, which is at the core of how neural networks learn. We present a novel technique for interpreting neural networks that identifies chemical substructures in the training data responsible for the activation of hidden neurons. For individual test compounds, the importance of hidden neurons is determined, and the associated substructures are used to explain the model prediction. Using structural alerts for mutagenicity from the Derek Nexus expert system as ground truth, we demonstrate the validity of the approach and show that the model explanations are competitive with, and complementary to, explanations obtained from an established feature attribution method.
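The abstract does not spell out implementation details, so the following is only a minimal sketch of the general idea under explicit assumptions: a Morgan-fingerprint multilayer perceptron as the model, hidden-neuron importance approximated as activation times output weight, and the shared substructure of a neuron's top-activating training compounds extracted with RDKit's maximum-common-substructure search. All names and parameters are illustrative and are not taken from the paper.

```python
# Hedged sketch, not the authors' implementation: link hidden neurons to substructures
# by inspecting which training compounds activate them most strongly.
import torch
import torch.nn as nn
from rdkit import Chem
from rdkit.Chem import AllChem, rdFMCS


def fingerprint(mol, n_bits=2048):
    # Assumed featurization: Morgan (ECFP-like) bit fingerprint as model input
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return torch.tensor(list(fp), dtype=torch.float32)


class ToxMLP(nn.Module):
    # Minimal feed-forward network with one hidden layer to inspect
    def __init__(self, n_bits=2048, n_hidden=128):
        super().__init__()
        self.hidden = nn.Linear(n_bits, n_hidden)
        self.act = nn.ReLU()
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x, return_hidden=False):
        h = self.act(self.hidden(x))
        y = torch.sigmoid(self.out(h))
        return (y, h) if return_hidden else y


def neuron_substructure(model, train_mols, neuron, top_k=5):
    # Find the training compounds that most strongly activate one hidden neuron,
    # then extract a shared substructure via maximum common substructure (MCS).
    X = torch.stack([fingerprint(m) for m in train_mols])
    with torch.no_grad():
        _, H = model(X, return_hidden=True)
    top_idx = torch.topk(H[:, neuron], min(top_k, len(train_mols))).indices.tolist()
    mcs = rdFMCS.FindMCS([train_mols[i] for i in top_idx])
    return Chem.MolFromSmarts(mcs.smartsString)


def explain(model, test_mol, train_mols, n_neurons=3):
    # Rank hidden neurons by a simple contribution score (activation * output weight)
    # and report the substructures associated with the highest-scoring neurons.
    x = fingerprint(test_mol).unsqueeze(0)
    with torch.no_grad():
        _, h = model(x, return_hidden=True)
    contrib = h.squeeze(0) * model.out.weight.squeeze(0)
    top = torch.topk(contrib, n_neurons).indices.tolist()
    return {n: neuron_substructure(model, train_mols, n) for n in top}
```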
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11094726 | PMC |
| http://dx.doi.org/10.1021/acs.jcim.4c00127 | DOI Listing |
CNS Neurosci Ther
January 2025
Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China.
Objectives: Parkinson's disease (PD) is characterized by olfactory dysfunction (OD) and cognitive deficits in its early stages, yet the link between OD and cognitive deficits is not well understood. This study aims to examine the changes in the olfactory network associated with OD and their relationship with cognitive function in de novo PD patients.
Methods: A total of 116 drug-naïve PD patients and 51 healthy controls (HCs) were recruited for this study.
Neurophysiology studies propose that predictive coding is implemented via alpha/beta (8-30 Hz) rhythms that prepare specific pathways to process predicted inputs. This leads to a state of relative inhibition, reducing feedforward gamma (40-90 Hz) rhythms and spiking to predictable inputs. We refer to this model as predictive routing.
Animals capable of complex behaviors tend to have more distinct brain areas than simpler organisms, and artificial networks that perform many tasks tend to self-organize into modules (1-3). This suggests that different brain areas serve distinct functions supporting complex behavior. However, a common observation is that essentially anything that an animal senses, knows, or does can be decoded from neural activity in any brain area (4-6).
Taiwan J Ophthalmol
November 2024
Sirindhorn International Institute of Technology, Thammasat University, Bangkok, Thailand.
Recent advances in artificial intelligence (AI) for retinal imaging fall into two major categories: discriminative and generative AI. For discriminative tasks, conventional convolutional neural networks (CNNs) are still the major AI technique. Vision transformers (ViTs), inspired by the transformer architecture in natural language processing, have emerged as a useful technique for discriminating retinal images.
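As a rough illustration of the discriminative setting described above (not taken from the review itself), the snippet below adapts a pretrained Vision Transformer from torchvision to a retinal image classification task; the class count and preprocessing are placeholder assumptions.

```python
# Hedged sketch: fine-tuning a pretrained ViT-B/16 for a discriminative retinal task.
# NUM_CLASSES and the preprocessing pipeline are placeholders, not from the cited review.
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 5  # hypothetical, e.g. diabetic retinopathy severity grades

# ImageNet-pretrained Vision Transformer backbone
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)

# Replace the classification head so the network outputs one logit per retinal class
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

# Standard ImageNet-style preprocessing for 224x224 inputs
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A preprocessed fundus image batch x of shape (N, 3, 224, 224) is then classified
# with logits = model(x), followed by task-specific fine-tuning of the head (and,
# optionally, the backbone).
```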
Bio Protoc
January 2025
Center for Translational Neuromedicine, University of Copenhagen, Copenhagen, Denmark.
Magnetic resonance imaging (MRI) is an invaluable method of choice for anatomical and functional in vivo imaging of the brain. Still, accurate delineation of brain structures remains a crucial step in MR image evaluation. This study presents a novel analytical algorithm developed in MATLAB for the automatic segmentation of cerebrospinal fluid (CSF) spaces in preclinical non-contrast MR images of the mouse brain.