Discrete-Attractor-like Tracking in Continuous Attractor Neural Networks.

Phys Rev Lett

RIKEN Center for Brain Science, Hirosawa 2-1, Wako City, Saitama 351-0198, Japan.

Published: January 2019

Continuous attractor neural networks generate a set of smoothly connected attractor states. In memory systems of the brain, these attractor states may represent continuous pieces of information, such as the spatial locations and head directions of animals. However, during the replay of previous experiences, hippocampal neurons show a discontinuous sequence in which discrete transitions of the neural state are phase locked with the slow-gamma (∼30-50 Hz) oscillation. Here, we explore the mechanisms underlying this discontinuous sequence generation. We find that a continuous attractor neural network has several phases, depending on the interactions between external input and local inhibitory feedback. The discrete-attractor-like behavior emerges naturally in one of these phases, without any built-in assumption of discreteness. We propose that the dynamics of continuous attractor neural networks are the key to generating discontinuous state changes phase locked to the brain rhythm.
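To make the bump-attractor picture concrete, below is a minimal sketch of a one-dimensional (ring) continuous attractor network in Python. It is an illustrative toy under assumed parameters, not the authors' published model: a Gaussian recurrent kernel plus divisive global inhibition sustains a localized activity bump that follows a moving external cue.

```python
# Minimal sketch of a 1D (ring) continuous attractor network.
# Illustrative toy, not the published model; all parameters are assumptions.
import numpy as np

N = 128                                   # neurons on a ring
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

# Translation-invariant excitatory kernel (Gaussian on the ring)
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
J = 2.0 * np.exp(-d**2 / (2 * 0.5**2)) / N

tau, dt = 10.0, 0.1                       # time constant and step (ms)
k_inh = 0.05                              # global inhibitory feedback strength
u = 0.01 * np.random.rand(N)              # synaptic input state

def rate(u):
    """Divisive global inhibition stabilizes a single activity bump."""
    r = np.maximum(u, 0.0) ** 2
    return r / (1.0 + k_inh * r.sum())

for step in range(20000):
    # External cue sweeps once around the ring over 2000 ms
    stim_pos = -np.pi + (2 * np.pi * step * dt / 2000.0) % (2 * np.pi)
    I_ext = 0.5 * np.exp(-np.angle(np.exp(1j * (theta - stim_pos)))**2 / 0.5)
    u += dt / tau * (-u + J @ rate(u) + I_ext)

# Decode the bump position as the population-vector angle
bump = np.angle(np.sum(rate(u) * np.exp(1j * theta)))
err = np.angle(np.exp(1j * (bump - stim_pos)))
print(f"stimulus {stim_pos:+.2f} rad, bump {bump:+.2f} rad, error {err:+.3f} rad")
```

This toy only shows the smooth-tracking baseline; in the regime the paper analyzes, the interplay of external input and inhibitory feedback makes such a bump jump in discrete steps rather than slide smoothly.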

Source
http://dx.doi.org/10.1103/PhysRevLett.122.018102

Publication Analysis

Top Keywords

continuous attractor (16); attractor neural (16); neural networks (12); attractor states (8); discontinuous sequence (8); phase locked (8); attractor (6); continuous (5); neural (5); discrete-attractor-like tracking (4)

Similar Publications

Investigating the intrinsic top-down dynamics of deep generative models.

Sci Rep

January 2025

Department of General Psychology and Padova Neuroscience Center, University of Padova, Padova, Italy.

Hierarchical generative models can produce data samples based on the statistical structure of their training distribution. This capability can be linked to current theories in computational neuroscience, which propose that spontaneous brain activity at rest is the manifestation of top-down dynamics of generative models detached from action-perception cycles. A popular class of hierarchical generative models is that of Deep Belief Networks (DBNs), energy-based deep learning architectures that can learn multiple levels of representation in a completely unsupervised way by exploiting Hebbian-like learning mechanisms.
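As a concrete illustration of the Hebbian-like, unsupervised learning the abstract refers to, here is a minimal sketch of a single Restricted Boltzmann Machine layer trained with one-step contrastive divergence (CD-1); DBNs are built by stacking such layers greedily. Layer sizes, the learning rate, and the random data stand-in are assumptions, not details from the paper.

```python
# Minimal sketch of one RBM layer trained with CD-1.
# Sizes, learning rate, and the synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 784, 128, 0.01
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step on a batch of binary visible vectors (batch, n_vis)."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                        # data-driven hidden probs
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                      # reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)                      # model-driven hidden probs
    # Hebbian-like rule: data correlations minus reconstruction correlations
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

batch = (rng.random((32, n_vis)) < 0.1).astype(float)   # stand-in for real data
for _ in range(10):
    cd1_update(batch)
```

The weight update is local (pre- times post-synaptic statistics), which is what makes the rule Hebbian-like and biologically appealing compared with backpropagation.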


Biological memory networks are thought to store information through experience-dependent changes in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and a precise balance of excitation and inhibition. To understand the computational consequences of E/I assemblies under biologically realistic constraints, we built a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex.
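A minimal sketch of the E/I-assembly idea in a recurrent spiking network follows: leaky integrate-and-fire neurons, with connection probabilities elevated within assemblies that contain both excitatory and inhibitory members. This is an illustrative toy, not the data-constrained zebrafish Dp model; every parameter below is an assumption.

```python
# Minimal sketch of a recurrent spiking network with co-tuned E/I assemblies.
# Illustrative toy; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_e, N_i = 400, 100
N = N_e + N_i
dt, T = 0.1, 500.0                        # ms
tau, v_th, v_reset = 20.0, 1.0, 0.0

# Two assemblies, each containing both E and I neurons (E/I assemblies)
assembly = np.concatenate([np.repeat([0, 1], N_e // 2),
                           np.repeat([0, 1], N_i // 2)])
is_exc = np.arange(N) < N_e

# Connection probability is higher within an assembly (co-tuning)
same = assembly[:, None] == assembly[None, :]
conn = rng.random((N, N)) < np.where(same, 0.2, 0.05)
W = np.where(is_exc[None, :], 0.04, -0.16) * conn   # W[i, j]: weight j -> i

v = rng.random(N) * v_th
spiked = np.zeros(N, dtype=bool)
counts = np.zeros(N)
for _ in range(int(T / dt)):
    I_rec = W @ spiked                    # synaptic kicks from last step's spikes
    v += dt / tau * (-v + 1.5 * (assembly == 0)) + I_rec  # drive assembly 0 only
    spiked = v >= v_th
    v[spiked] = v_reset
    counts += spiked

for a in (0, 1):
    hz = counts[(assembly == a) & is_exc].mean() / T * 1000.0
    print(f"assembly {a} mean excitatory rate: {hz:.1f} Hz")
```

Because inhibitory members of the driven assembly are recruited alongside its excitatory members, inhibition tracks excitation within the assembly, which is the signature of the precise balance the abstract describes.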


Background: Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder that often persists into adulthood. It is frequently accompanied by bipolar disorder (BD) as well as borderline personality disorder (BPD). It is unclear whether these disorders share underlying pathomechanisms, given that all three are characterized by alterations in affective states, whether long- or short-term.


During early life, we develop the ability to choose what we focus on and what we ignore, allowing us to regulate perception and action in complex environments. But how does this developing ability influence the way we spontaneously allocate attention to real-world objects during free behaviour? In this narrative review, we examine this question by considering the time dynamics of spontaneous overt visual attention and how these develop through early life. Even in early childhood, visual attention shifts occur both periodically and aperiodically.


Continuous bump attractor networks (CANs) have been widely used in the past to explain the phenomenology of working memory (WM) tasks in which continuous-valued information has to be maintained to guide future behavior. Standard CAN models suffer from two major limitations: the stereotyped shape of the bump attractor does not reflect differences in the representational quality of WM items, and the recurrent connections within the network require a biologically unrealistic level of fine-tuning. We address both challenges in a two-dimensional (2D) network model formalized by two coupled neural field equations of Amari type.
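For reference, an Amari-type neural field reduces, in one dimension, to τ ∂u(x,t)/∂t = -u + ∫ w(x-y) f(u(y,t)) dy + S(x) - h, with a Heaviside firing-rate function f and resting level -h. Below is a minimal Euler-integrated sketch of such a single 1D field; the publication couples two 2D fields, so this simplified version and all its parameters are assumptions for illustration.

```python
# Minimal sketch of a 1D Amari-type neural field, integrated with explicit
# Euler. Simplified single field; the paper uses two coupled 2D fields.
import numpy as np

L, N = 10.0, 256
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Lateral-inhibition ("Mexican hat") kernel: local excitation, broad inhibition
def kernel(d):
    return 2.0 * np.exp(-d**2 / 2.0) - 0.8 * np.exp(-d**2 / 18.0)

Wk = kernel(x[:, None] - x[None, :]) * dx   # discretized convolution operator

tau, h, dt = 1.0, 0.5, 0.01
u = -h * np.ones(N)                          # start at the resting level
S = 1.0 * np.exp(-x**2)                      # transient localized input

for step in range(3000):
    f = (u > 0).astype(float)                # Heaviside firing-rate function
    drive = S if step < 1000 else 0.0        # remove input; bump may persist
    u += dt / tau * (-u + Wk @ f + drive - h)

active = x[u > 0]
if active.size:
    print(f"self-sustained bump from {active.min():.2f} to {active.max():.2f}")
else:
    print("no self-sustained bump for these parameters")
```

With this kernel the field is bistable, so the bump outlives the input, which is the basic working-memory mechanism CAN models rely on; the paper's contribution lies in relaxing the fine-tuning this kernel normally requires.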

