Publications by authors named "Adam S Charles"

Article Synopsis
  • The sparse coding model proposes that the visual system efficiently represents complex natural images using only a small number of active features, but the model has historically suffered from costly computation and from uncertainty in fitting it to data.
  • A new approach, the sparse coding variational autoencoder (SVAE), combines the sparse coding model with a deep neural network recognition model and fits it to data by maximizing the evidence lower bound (ELBO).
  • The SVAE differs from traditional variational autoencoders in its overcomplete latent representation, its sparse prior in place of a Gaussian one, and its simpler linear decoder; it shows improved performance on natural image data while capturing key response properties of neurons in early visual processing (see the sketch below).
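
The ingredients named in the synopsis (an overcomplete latent representation, a sparse prior, a linear decoder, and an ELBO objective) can be made concrete with a small sketch. The following is a minimal, illustrative PyTorch sketch; the layer sizes, the Laplace prior, and the Gaussian approximate posterior are assumptions for illustration, not details drawn from the paper itself.

```python
# Minimal SVAE-style sketch, assuming: a Laplace (sparse) prior, a Gaussian approximate
# posterior produced by a deep encoder, and a simple linear decoder. All dimensions and
# the specific prior are illustrative assumptions, not the paper's actual settings.
import torch
import torch.nn as nn

class SVAE(nn.Module):
    def __init__(self, n_pixels=256, n_latent=512):  # overcomplete: n_latent > n_pixels
        super().__init__()
        # Deep recognition (encoder) network: image patch -> posterior mean and log-variance
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * n_latent),
        )
        # Simple linear decoder: latent coefficients -> reconstructed patch
        self.decoder = nn.Linear(n_latent, n_pixels, bias=False)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization trick
        return self.decoder(z), mu, log_var, z

def neg_elbo(x, x_hat, mu, log_var, z, noise_var=0.1):
    # Monte Carlo estimate of the negative ELBO (up to additive constants):
    # reconstruction error + log q(z|x) - log p(z), with a sparse Laplace prior p(z).
    recon = 0.5 * ((x - x_hat) ** 2).sum(-1) / noise_var
    log_q = (-0.5 * ((z - mu) ** 2) / log_var.exp() - 0.5 * log_var).sum(-1)
    log_p = -z.abs().sum(-1)                                    # Laplace prior term
    return (recon + log_q - log_p).mean()
```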

Systems neuroscience has experienced an explosion of new tools for reading and writing neural activity, enabling exciting new experiments such as all-optical or closed-loop control that effect powerful causal interventions. At the same time, improved computational models are capable of reproducing behavior and neural activity with increasing fidelity. Unfortunately, these advances have drastically increased the complexity of integrating different lines of research, resulting in suboptimal experiments with missed opportunities and untapped potential.


Accurate tracking of the same neurons across multiple days is crucial for studying changes in neuronal activity during learning and adaptation. Advances in high-density extracellular electrophysiology recording probes, such as Neuropixels, provide a promising avenue to accomplish this goal. Identifying the same neurons in multiple recordings is, however, complicated by non-rigid movement of the tissue relative to the recording sites (drift) and loss of signal from some neurons.


Background: Functional ultrasound imaging (fUS) is an emerging imaging technique that indirectly measures neural activity via changes in blood volume. Chronic fUS imaging during cognitive tasks in freely moving animals faces multiple exceptional challenges: performing large durable craniotomies with chronic implants, designing behavioral experiments matching the hemodynamic timescale, stabilizing the ultrasound probe during freely moving behavior, accurately assessing motion artifacts, and validating that the animal can perform cognitive tasks while tethered.

New Method: We provide validated solutions for those technical challenges.

Article Synopsis
  • Researchers developed a deep-learning image-restoration algorithm that enhances both ex vivo and in vivo imaging, improving the ability to visualize synapses in real time.
  • The new method successfully tracked behavior-related changes in synaptic structures in living transgenic mice with high precision, showcasing the potential of combining advanced imaging techniques for neuroscience research.

Functional optical imaging in neuroscience is rapidly growing with the development of optical systems and fluorescence indicators. To realize the potential of these massive spatiotemporal datasets for relating neuronal activity to behavior and stimuli and uncovering local circuits in the brain, accurate automated processing is increasingly essential. We cover recent computational developments in the full data processing pipeline of functional optical microscopy for neuroscience data and discuss ongoing and emerging challenges.


Optical imaging of calcium signals in the brain has enabled researchers to observe the activity of hundreds to thousands of individual neurons simultaneously. Current methods predominantly use morphological information, typically focusing on the expected shapes of cell bodies, to better identify neurons in the field of view. These explicit shape constraints limit the applicability of automated cell identification to other important imaging scales with more complex morphologies.


Population recordings of calcium activity are a major source of insight into neural function. Large datasets require automated processing, but this can introduce errors that are difficult to detect. Here we show that popular time-course estimation algorithms often contain substantial misattribution errors affecting 10-20% of transients.


Recent work has highlighted that many types of variables are represented in each neocortical area. How can these many neural representations be organized together without interference and coherently maintained/updated through time? We recorded from excitatory neural populations in posterior cortices as mice performed a complex, dynamic task involving multiple interrelated variables. The neural encoding implied that highly correlated task variables were represented by less-correlated neural population modes, while pairs of neurons exhibited a spectrum of signal correlations.
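
Here, "signal correlation" refers to the correlation between two neurons' trial-averaged, task-locked responses. A minimal sketch of how pairwise signal correlations can be computed from trial-structured data is below; the array layout, variable names, and use of NumPy are illustrative assumptions, not the paper's actual analysis pipeline.

```python
# Minimal sketch: pairwise signal correlations from trial-structured activity.
# Assumes an array `activity` of shape (n_trials, n_neurons, n_timepoints);
# this layout is an illustrative assumption only.
import numpy as np

def signal_correlations(activity):
    # Trial-averaged ("signal") response of each neuron across time
    mean_response = activity.mean(axis=0)        # (n_neurons, n_timepoints)
    # Correlate every pair of neurons' trial-averaged responses
    return np.corrcoef(mean_response)            # (n_neurons, n_neurons)

# Example with synthetic data: 50 trials, 20 neurons, 100 time bins
rng = np.random.default_rng(0)
activity = rng.normal(size=(50, 20, 100))
corr = signal_correlations(activity)
print(corr.shape)                                # (20, 20)
```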


Background: The past decade has seen a multitude of new in vivo functional imaging methodologies. However, the lack of ground-truth comparisons or evaluation metrics makes impossible the large-scale, systematic validation that is vital to the continued development and use of optical microscopy.

New Method: We provide a new framework for evaluating two-photon microscopy methods via in silico Neural Anatomy and Optical Microscopy (NAOMi) simulation.
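
Because the simulation provides the underlying ground truth, evaluation can proceed by comparing an analysis algorithm's outputs directly against the simulated sources. The sketch below shows one generic way to do this, matching estimated fluorescence time courses to ground-truth time courses by best correlation; this is not the NAOMi interface, and the array shapes, function names, and matching rule are illustrative assumptions.

```python
# Generic sketch of ground-truth evaluation: match each estimated time course to its
# best-correlated ground-truth time course and report the matched correlations.
# This is NOT the NAOMi API; shapes and the matching rule are assumptions.
import numpy as np

def evaluate_time_courses(estimated, ground_truth):
    """estimated: (n_est, T) and ground_truth: (n_true, T) fluorescence time courses."""
    n_est = estimated.shape[0]
    # Cross-correlation between every estimated/ground-truth pair
    corr = np.corrcoef(np.vstack([estimated, ground_truth]))[:n_est, n_est:]
    best_match = corr.argmax(axis=1)   # index of best ground-truth match per estimate
    best_corr = corr.max(axis=1)       # how well each estimate recovers a true source
    return best_match, best_corr
```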


As larger datasets become easier to acquire in experimental brain science, computational and statistical brain science must achieve similar advances to fully capitalize on these data. Tackling these problems will benefit from a more explicit and concerted effort to work together. Specifically, brain science can be further democratized by harnessing the power of community-driven tools, which are both built by and benefit from many different people with different backgrounds and expertise.


Neurons in many brain areas exhibit high trial-to-trial variability, with spike counts that are overdispersed relative to a Poisson distribution. Recent work (Goris, Movshon, & Simoncelli, 2014) has proposed to explain this variability in terms of a multiplicative interaction between a stochastic gain variable and a stimulus-dependent Poisson firing rate, which produces quadratic relationships between spike count mean and variance. Here we examine this quadratic assumption and propose a more flexible family of models that can account for a more diverse set of mean-variance relationships.
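
The quadratic mean-variance relationship follows from the law of total variance under the gain-modulated Poisson assumption; the notation below is ours, not taken verbatim from the paper. Writing the spike count as $y \mid g \sim \mathrm{Poisson}(g\mu)$ with a stochastic gain $g$ satisfying $\mathbb{E}[g] = 1$ and $\mathrm{Var}[g] = \sigma_g^2$,

\[
\mathrm{Var}[y] \;=\; \mathbb{E}\big[\mathrm{Var}[y \mid g]\big] + \mathrm{Var}\big[\mathbb{E}[y \mid g]\big]
\;=\; \mathbb{E}[g\mu] + \mathrm{Var}[g\mu]
\;=\; \mu + \sigma_g^2\,\mu^2 ,
\]

so the variance is a fixed quadratic function of the mean $\mu$; the more flexible model family relaxes this specific functional form.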


Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume.
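
The proportionality between image-pair separation and depth follows from the geometry of the V-shaped point spread function. As a simple illustration (the abstract states only that separation is proportional to depth; the half-angle parameterization is our assumption), if each arm of the V makes a half-angle $\theta$ with the optical axis, a source at depth $z$ below the vertex projects to two images separated by

\[
d(z) \;\approx\; 2\,z \tan\theta ,
\]

so depth can be read out from the measured separation as $z \approx d / (2\tan\theta)$.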


Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity.
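
The compressed-sensing view of short-term memory can be illustrated with a small simulation: a sparse input sequence drives a linear recurrent network, the final network state acts as a compressed measurement of the input history, and the history is recovered by l1-regularized regression. The network size, orthogonal connectivity, and use of scikit-learn's Lasso below are illustrative assumptions, not the paper's exact setup or guarantees.

```python
# Sketch: short-term memory as compressed sensing in a linear recurrent network.
# x_{t+1} = W x_t + v u_t, so the final state is x_T = A u with columns A[:,k] = W^{T-1-k} v.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_neurons, T, n_active = 100, 400, 10            # T > n_neurons: more inputs than neurons

# Random orthogonal (energy-preserving) recurrent connectivity and input weights
W, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))
v = rng.normal(size=n_neurons) / np.sqrt(n_neurons)

# Sparse input sequence: only n_active of T time steps carry a signal
u = np.zeros(T)
u[rng.choice(T, n_active, replace=False)] = rng.normal(size=n_active)

# Run the network while building the effective measurement matrix A
x = np.zeros(n_neurons)
A = np.zeros((n_neurons, T))
for t in range(T):
    A = W @ A                                    # push earlier inputs one step deeper
    A[:, t] = v                                  # current input enters through v
    x = W @ x + v * u[t]

# Recover the full input history from the final state via l1 minimization
u_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(A, x).coef_
print(np.corrcoef(u, u_hat)[0, 1])               # typically close to 1 for sparse inputs
```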


The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate ℓp norms (0 ≤ p ≤ 2), modified ℓp norms, block-ℓ1 norms, and reweighted algorithms.
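
For the familiar ℓ1 case, the LCA reduces to leaky-integrator dynamics on internal states with soft-threshold activations and lateral inhibition through dictionary overlaps; other penalties correspond to other threshold functions. Below is a minimal numerical sketch of that ℓ1 case; the dictionary size, step size, and threshold value are illustrative assumptions.

```python
# Minimal sketch of the Locally Competitive Algorithm (LCA) for the l1 (soft-threshold) case.
import numpy as np

def lca_l1(y, Phi, lam=0.1, tau=10.0, n_steps=500):
    """Sparse code the signal y in dictionary Phi (columns = unit-norm elements)."""
    n_elements = Phi.shape[1]
    u = np.zeros(n_elements)                     # internal (membrane-like) states
    drive = Phi.T @ y                            # feedforward input to each unit
    G = Phi.T @ Phi - np.eye(n_elements)         # lateral inhibition from overlapping elements
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)    # soft threshold -> active coefficients
        u += (1.0 / tau) * (drive - u - G @ a)                # leaky-integrator dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Example: recover a sparse code from an overcomplete random dictionary
rng = np.random.default_rng(1)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
a_hat = lca_l1(Phi @ a_true, Phi, lam=0.05)
print(np.count_nonzero(a_hat))                   # typically only a few active units
```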
