Neural prostheses can compensate for functional losses caused by blocked neural pathways by modeling neural activity across cortical areas. Existing methods generally use point process models to predict neural spikes in one area from those in another, and fit the model by maximizing the log-likelihood of the recorded activities of individual neurons under the model predictions. However, single-neuron recordings can be distorted, while neural population activity tends to reside within a stable low-dimensional subspace called the neural manifold, which reflects the connectivity and correlation among output neurons. This paper proposes a neural manifold constraint that modifies the loss function for model training. The constraint term minimizes the distance from the model predictions to the empirical manifold, correcting predictions learned from distorted recordings. We test the method on synthetic data with distorted output spike trains and evaluate the similarity between the model predictions and the original output spike trains with the Kolmogorov-Smirnov test. The results show that models trained with the constraint achieve higher goodness-of-fit than those trained without it, suggesting a more robust approach for neural prostheses in noisy environments.
DOI: http://dx.doi.org/10.1109/EMBC40787.2023.10340489