Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than the ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
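The two-timescale picture above (fast learning, then slow drift within the solution space toward a sparser solution) can be caricatured with a toy problem. This is an illustrative sketch, not the paper's network: an explicit L1 penalty stands in for the implicit regularization of noisy training, and the fit `w1 + w2 = 1` has a whole line of equally good solutions along which the weights slowly drift.

```python
# Toy two-phase dynamics: gradient descent on (w1 + w2 - 1)^2 plus a small
# L1 penalty.  The task loss collapses in ~tens of steps (fast phase); the
# penalty then drags the weights along the solution line w1 + w2 = 1 toward
# the sparse solution (1, 0) over tens of thousands of steps (slow phase),
# while the task loss stays near zero throughout.

def sgn(x):
    return (x > 0) - (x < 0)

def step(w1, w2, lr=0.1, lam=1e-3):
    err = 2.0 * (w1 + w2 - 1.0)          # gradient of the task loss
    w1 -= lr * (err + lam * sgn(w1))     # lam * sgn(w) is the L1 subgradient
    w2 -= lr * (err + lam * sgn(w2))
    return w1, w2

w1, w2 = 3.0, -1.5                        # start off the solution manifold
for t in range(30000):
    w1, w2 = step(w1, w2)
    if t == 99:
        fast = (w1 + w2 - 1.0) ** 2       # task loss after the fast phase

print(fast)          # already ~0 after 100 steps
print(w1, w2)        # slow drift ends near the sparse solution (1, 0)
```

The separation of timescales falls out of the parameters: the task-loss contraction happens at rate `lr`, while the drift along the manifold moves only `lr * lam` per step, three orders of magnitude slower here.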
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10871206 | PMC
http://dx.doi.org/10.1101/2023.05.04.539512 | DOI Listing
Cell Rep
January 2025
Department of Biology, Boston University, Boston, MA 02215, USA; Center for Neurophotonics, Boston University, Boston, MA 02215, USA; Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA.
Nat Commun
January 2025
Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal.
The nucleus accumbens (NAc) is a key brain region for motivated behaviors, yet how distinct neuronal populations encode appetitive or aversive stimuli remains undetermined. Using microendoscopic calcium imaging in mice, we tracked the activity of D1- or D2-medium spiny neurons (MSNs) in the NAc shell during exposure to stimuli of opposing valence and during associative learning. Despite drift in individual neurons' coding, population activity of both D1- and D2-MSNs was sufficient to discriminate unconditioned stimuli of opposing valence, but not predictive cues.
Cogn Neurodyn
December 2024
Research Centre of Mathematics, University of Minho, Guimarães, Portugal.
Continuous bump attractor networks (CANs) have been widely used in the past to explain the phenomenology of working memory (WM) tasks in which continuous-valued information has to be maintained to guide future behavior. Standard CAN models suffer from two major limitations: the stereotyped shape of the bump attractor does not reflect differences in the representational quality of WM items, and the recurrent connections within the network require a biologically unrealistic level of fine-tuning. We address both challenges in a two-dimensional (2D) network model formalized by two coupled neural field equations of Amari type.
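The Amari-type field dynamics referred to above can be sketched in one dimension (the paper itself uses two coupled 2D fields; the kernel and parameters below are illustrative choices, not taken from the article). A transient localized input creates a bump of activity that self-sustains after the input is removed, which is the working-memory behavior that bump attractor models are built around:

```python
import numpy as np

# Minimal 1D Amari neural field:  du/dt = -u + w * f(u) + h + I(x, t),
# with Gaussian local excitation, constant lateral inhibition, a Heaviside
# firing-rate function, and a negative resting level h.

x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]

# Lateral-inhibition connectivity sampled on the same grid as the field.
w = 2.0 * np.exp(-x**2 / 2.0) - 0.4

h = -1.0                                  # resting level: u = h without input
u = np.full_like(x, h)
stim = 3.0 * np.exp(-(x - 2.0)**2 / 2.0)  # transient input centered at x = 2

def step(u, inp, dt=0.1):
    fu = (u > 0).astype(float)            # Heaviside firing-rate function
    conv = np.convolve(fu, w, mode="same") * dx
    return u + dt * (-u + conv + h + inp)

for _ in range(50):                       # input on: bump forms
    u = step(u, stim)
for _ in range(500):                      # input off: bump self-sustains
    u = step(u, 0.0)

print(x[np.argmax(u)])                    # bump remains centered near x = 2
```

The stereotyped-bump limitation mentioned in the abstract is visible here: whatever the input amplitude, the self-sustained bump relaxes to the same fixed width and height determined by the kernel and threshold, which is one of the two issues the paper's 2D coupled-field model is designed to address.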
NPJ Sci Learn
December 2024
KU Leuven, Leuven, Belgium.
Perception and perceptual memory play crucial roles in fear generalization, yet their dynamic interaction remains understudied. This research (N = 80) explored their relationship through a classical differential conditioning experiment. Results revealed that while fear context perception fluctuates over time with a drift effect, perceptual memory remains stable, creating a disjunction between the two systems.