We present a self-supervised framework that learns population-level codes for arbitrary ensembles of neural recordings at scale. We address two key challenges in scaling models with neural time-series data: sparse and variable electrode distribution across subjects and datasets. The Population Transformer (PopT) stacks on top of pretrained representations and enhances downstream decoding by enabling learned aggregation of multiple spatially-sparse data channels. The pretrained PopT lowers the amount of data required for downstream decoding experiments, while increasing accuracy, even on held-out subjects and tasks. Compared to end-to-end methods, this approach is computationally lightweight and more interpretable, while still retaining competitive performance. We further show how our framework is generalizable to multiple time-series embeddings and neural data modalities. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how they can be used to extract neuroscience insights from massive amounts of data. We release our code as well as a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability.
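To make the architecture concrete, below is a minimal sketch (not the authors' released code, and all names and hyperparameters here are illustrative assumptions) of a PopT-style aggregator: each channel is assumed to arrive as a pretrained temporal embedding, a learned [CLS] token and a projection of 3-D electrode coordinates are added, and a Transformer encoder performs the learned aggregation over the spatially sparse channel set before a lightweight decoding head.

```python
# Hypothetical sketch of a PopT-style aggregation module (PyTorch).
import torch
import torch.nn as nn


class PopulationAggregator(nn.Module):
    def __init__(self, emb_dim=256, n_layers=4, n_heads=8, n_classes=2):
        super().__init__()
        # Learned aggregation token, prepended to the channel sequence.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, emb_dim))
        # Project (x, y, z) electrode positions into the embedding space,
        # so the model can handle sparse, subject-specific electrode layouts.
        self.pos_proj = nn.Linear(3, emb_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Downstream decoding head (e.g., a binary classification task).
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, channel_emb, channel_pos):
        # channel_emb: (batch, n_channels, emb_dim) pretrained per-channel embeddings
        # channel_pos: (batch, n_channels, 3) electrode coordinates
        x = channel_emb + self.pos_proj(channel_pos)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)   # prepend aggregation token
        x = self.encoder(x)
        return self.head(x[:, 0])        # decode from the [CLS] summary


if __name__ == "__main__":
    model = PopulationAggregator()
    emb = torch.randn(4, 37, 256)   # channel count varies per subject/session
    pos = torch.randn(4, 37, 3)
    print(model(emb, pos).shape)    # -> torch.Size([4, 2])
```

Because the transformer attends over an unordered set of channel tokens, the same pretrained aggregator can, in principle, be reused across subjects and datasets with different numbers and placements of electrodes.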
