Neuroscience research has made immense progress over the last decade, but our understanding of the brain remains fragmented and piecemeal: the dream of probing an arbitrary brain region and automatically reading out the information encoded in its neural activity remains out of reach. In this work, we build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas. We introduce a novel self-supervised modeling approach for population activity in which the model alternates between masking out and reconstructing neural activity across different time steps, neurons, and brain regions.
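The mask-and-reconstruct objective described above can be sketched in a few lines of numpy (a minimal illustration of the general idea only, not the paper's model; the Poisson loss and the `mask_frac` parameter are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(spikes, predicted_rates, mask_frac=0.25):
    """Score a reconstruction only on randomly masked (held-out) bins.

    spikes: (neurons, time) spike-count matrix
    predicted_rates: model output of the same shape
    """
    mask = rng.random(spikes.shape) < mask_frac   # bins hidden from the model
    rate = np.clip(predicted_rates, 1e-6, None)
    nll = rate - spikes * np.log(rate)            # Poisson NLL per bin
    return nll[mask].mean()

spikes = rng.poisson(2.0, size=(10, 50)).astype(float)
good = masked_reconstruction_loss(spikes, spikes + 1e-3)          # near-perfect rates
bad = masked_reconstruction_loss(spikes, np.full_like(spikes, 10.0))
```

A training loop would mask different subsets of time steps, neurons, or regions on each pass and minimize this loss with respect to the model producing `predicted_rates`.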
Our ability to use deep learning approaches to decipher neural activity would likely benefit from greater scale, in terms of both model size and datasets. However, the integration of many neural recordings into one unified model is challenging, as each recording contains the activity of different neurons from different individual animals. In this paper, we introduce a training framework and architecture designed to model the population dynamics of neural activity across diverse, large-scale neural recordings.
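One generic way to reconcile recordings with different neuron identities is to give every unit its own learned embedding and build input tokens from individual spikes, so sessions recorded from different animals can share one model. The sketch below is a hedged illustration of that strategy only; the table size, dimension `d`, and sinusoidal time code are assumptions, and the paper's actual tokenization may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
unit_emb = rng.normal(size=(100, d))        # hypothetical per-unit embedding table

def spikes_to_tokens(unit_ids, spike_times, emb):
    """Build one token per spike: unit embedding plus a simple time code."""
    phases = spike_times[:, None] * np.arange(1, d + 1)[None, :]
    time_enc = np.sin(phases)               # sinusoidal encoding of spike time
    return emb[unit_ids] + time_enc

tokens = spikes_to_tokens(np.array([3, 3, 41]), np.array([0.1, 0.5, 0.2]), unit_emb)
```

Because unit identity lives in the embedding table rather than in a fixed input dimension, a new session only requires new embedding rows, not a new model.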
Message passing neural networks have shown considerable success on graph-structured data. However, there are many instances where message passing can lead to over-smoothing, or fail when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message passing neural networks.
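The over-smoothing failure mode mentioned above is easy to reproduce with a plain mean-aggregation layer (a generic numpy sketch, not the framework introduced in the paper):

```python
import numpy as np

def message_passing_step(A, X):
    """One round of mean aggregation: each node averages its neighborhood."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / deg

# path graph on 3 nodes, self-loops included
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
X = np.array([[0.0], [1.0], [2.0]])   # initially distinct node features

for _ in range(20):
    X = message_passing_step(A, X)
# after many rounds the node features have nearly collapsed to a single value
```

Frameworks that counteract this typically modify the graph or the update rule so that repeated rounds of aggregation do not erase the distinctions between differently labeled neighbors.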
Int IEEE EMBS Conf Neural Eng
April 2023
Human behavior is incredibly complex, and the factors that drive decision making (from instinct, to strategy, to biases between individuals) often vary over multiple timescales. In this paper, we design a predictive framework that learns representations to encode an individual's 'behavioral style', i.e.
Int IEEE EMBS Conf Neural Eng
April 2023
Finding points in time where the distribution of neural responses changes (change points) is an important step in many neural data analysis pipelines. However, in complex and free behaviors, where different types of shifts occur at different rates, existing methods for change point (CP) detection can be difficult to apply because they cannot necessarily handle the different kinds of changes that may occur in the underlying neural distribution. Additionally, response changes are often sparse in high-dimensional neural recordings, which can lead existing methods to detect spurious changes.
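As a baseline for intuition, here is the classic two-window statistic for detecting a mean shift in a one-dimensional rate trace (a generic sketch, not the detector developed in this work; the window size `w` is an assumption):

```python
import numpy as np

def mean_shift_scores(x, w):
    """Score each interior point by the gap between the means of the
    w samples before and the w samples after it."""
    scores = np.zeros(len(x))
    for t in range(w, len(x) - w):
        scores[t] = abs(x[t:t + w].mean() - x[t - w:t].mean())
    return scores

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.5, 100),   # baseline rate
                    rng.normal(3.0, 0.5, 100)])  # shifted rate
scores = mean_shift_scores(x, w=20)
cp = int(scores.argmax())   # peaks near the true change at t = 100
```

A single fixed window like this is exactly what breaks down in the setting described above: different window sizes are sensitive to different rates of change, and in high dimensions a per-channel statistic will fire on channels that never changed.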
Adv Neural Inf Process Syst
December 2022
Complex time-varying systems are often studied by abstracting away from the dynamics of individual components to build a model of the population-level dynamics from the start. However, when building a population-level description, it can be easy to lose sight of each individual and how they contribute to the larger picture. In this paper, we present a novel transformer architecture for learning from time-varying data that builds descriptions of both the individual as well as the collective population dynamics.
Cell type is hypothesized to be a key determinant of a neuron's role within a circuit. Here, we examine whether a neuron's transcriptomic type influences the timing of its activity. We develop a deep-learning architecture that learns features of interevent intervals across timescales (ms to >30 min).
Adv Neural Inf Process Syst
December 2021
Meaningful and simplified representations of neural activity can yield insights into how information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called .
Adv Neural Inf Process Syst
January 2022
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g.
Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities. Without any additional structural assumptions on transport, however, OT can be fragile to outliers or noise, especially in high dimensions. Here, we introduce Latent Optimal Transport (LOT), a new approach for OT that simultaneously learns low-dimensional structure in data while leveraging this structure to solve the alignment task.
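For reference, plain entropy-regularized OT can be computed with the standard Sinkhorn iterations (a sketch of the baseline LOT builds on, not LOT itself; the `eps` and `n_iter` values are assumptions):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=200):
    """Entropy-regularized optimal transport between histograms a and b
    under cost matrix C, via alternating Sinkhorn scaling updates."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # transport plan

# toy example: align two small 1-D point clouds
x = np.linspace(0.0, 1.0, 5)
y = x + 0.1                               # same cloud, shifted
C = (x[:, None] - y[None, :]) ** 2        # squared-distance cost
a = np.full(5, 0.2)                       # uniform source weights
b = np.full(5, 0.2)                       # uniform target weights
P = sinkhorn(a, b, C)
```

The rows and columns of `P` recover the prescribed marginals; LOT's contribution is to additionally constrain such plans through a learned low-dimensional latent space, which is what makes the alignment more robust to noise and outliers.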