Continuous attractor neural networks are used extensively to model a variety of experimentally observed coherent brain states, ranging from cortical waves of activity to stationary activity bumps. The latter are thought to play an important role in various forms of neural information processing, including population coding in primary visual cortex (V1) and working memory in prefrontal cortex. However, one limitation of continuous attractor networks is that the location of the peak of an activity bump (or wave) can diffuse due to intrinsic network noise. This reflects marginal stability of bump solutions with respect to the action of an underlying continuous symmetry group. Previous studies have used perturbation theory to derive an approximate stochastic differential equation for the location of the peak (phase) of the bump. Although this method captures the diffusive wandering of a bump solution, it ignores fluctuations in the amplitude of the bump. In this paper, we show how amplitude fluctuations can be analyzed by reducing the underlying stochastic neural field equation to a finite-dimensional stochastic gradient dynamical system that tracks the stochastic motion of both the amplitude and phase of bump solutions. This allows us to derive exact expressions for the steady-state probability density and its moments, which are then used to investigate two major issues: (i) the input-dependent suppression of neural variability and (ii) noise-induced transitions to bump extinction. We develop the theory by considering the particular example of a ring attractor network with SO(2) symmetry, which is the most common architecture used in attractor models of working memory and population tuning in V1. However, we also extend the analysis to a higher-dimensional spherical attractor network with SO(3) symmetry, which has previously been proposed as a model of orientation and spatial frequency tuning in V1.
We thus establish how a combination of stochastic analysis and group theoretic methods provides a powerful tool for investigating the effects of noise in continuous attractor networks.
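The diffusive wandering of the bump phase described above can be seen directly in simulation. The following is a minimal sketch, not the paper's exact formulation: a ring network of Amari type with SO(2)-symmetric cosine connectivity and additive noise, where the connectivity strength, firing-rate function, and noise amplitude are all illustrative choices. The bump's phase is read out from the population vector, and its spread across independent trials reflects the noise-induced diffusion along the marginally stable direction.

```python
import numpy as np

# Hedged sketch of phase diffusion in a stochastic ring attractor.
# Model details (cosine weights, tanh rate, noise level) are assumptions
# for illustration, not the specific model analyzed in the paper.

rng = np.random.default_rng(0)
N = 128                                      # neurons on the ring
theta = np.linspace(0, 2*np.pi, N, endpoint=False)
W = (6.0 / N) * np.cos(theta[:, None] - theta[None, :])  # SO(2)-symmetric weights
f = lambda u: np.tanh(np.clip(u, 0.0, None))             # saturating rectified rate
dt, tau, eps = 0.01, 1.0, 0.02               # time step, time constant, noise strength

def run_trial(T=1500):
    """Integrate du = (-u + W f(u)) dt/tau + eps dW and return the bump phase."""
    u = np.cos(theta)                        # bump initially centred at phase 0
    for _ in range(T):
        u += dt / tau * (-u + W @ f(u)) + np.sqrt(dt) * eps * rng.standard_normal(N)
    # phase of the bump from the population vector (circular mean of activity)
    return np.angle(np.sum(f(u) * np.exp(1j * theta)))

phases = np.array([run_trial() for _ in range(10)])
print("phase spread across trials:", np.std(phases))
```

Because the cosine kernel makes the dynamics equivariant under rotations, the deterministic system has a continuous ring of bump solutions; the noise term then drives slow diffusion along that ring, which is what the trial-to-trial phase spread measures.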
DOI: http://dx.doi.org/10.1103/PhysRevE.100.012402
Int J Bipolar Disord
December 2024
Department for Psychiatry, Psychosomatic Medicine and Psychotherapy, University Hospital Frankfurt-Goethe University, Frankfurt am Main, Germany.
Background: Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder that often persists into adulthood. Moreover, it is frequently accompanied by bipolar disorder (BD) as well as borderline personality disorder (BPD). It is unclear whether these disorders share underlying pathomechanisms, given that all three are characterized by alterations in affective states, either long- or short-term.
Neurosci Biobehav Rev
December 2024
BabyDevLab, School of Psychology, University of East London, Water Lane, London, E15 4LZ, UK.
During early life, we develop the ability to choose what we focus on and what we ignore, allowing us to regulate perception and action in complex environments. But how does this change influence how we spontaneously allocate attention to real-world objects during free behaviour? Here, in this narrative review, we examine this question by considering the time dynamics of spontaneous overt visual attention, and how these develop through early life. Even in early childhood, visual attention shifts occur both periodically and aperiodically.
Cogn Neurodyn
December 2024
Research Centre of Mathematics, University of Minho, Guimarães, Portugal.
Continuous bump attractor networks (CANs) have been widely used in the past to explain the phenomenology of working memory (WM) tasks in which continuous-valued information has to be maintained to guide future behavior. Standard CAN models suffer from two major limitations: the stereotyped shape of the bump attractor does not reflect differences in the representational quality of WM items and the recurrent connections within the network require a biologically unrealistic level of fine tuning. We address both challenges in a two-dimensional (2D) network model formalized by two coupled neural field equations of Amari type.
PNAS Nexus
December 2024
SISSA, Scuola Internazionale Superiore di Studi Avanzati, Cognitive Neuroscience, Trieste 34136, Italy.
Recent research involving bats flying in long tunnels has confirmed that hippocampal place cells can be active at multiple locations, with considerable variability in place field size and peak rate. With self-organizing recurrent networks, variability implies inhomogeneity in the synaptic weights, impeding the establishment of a continuous manifold of fixed points. Are continuous attractor neural networks still valid models for understanding spatial memory in the hippocampus, given such variability? Here, we ask what the noise limits are, in terms of an experimentally inspired parametrization of the irregularity of a single map, beyond which the notion of a continuous attractor is no longer relevant.
eLife
December 2024
Leloir Institute - IIBBA/CONICET, Buenos Aires, Argentina.
Entorhinal grid cells implement a spatial code with hexagonal periodicity, signaling the position of the animal within an environment. Grid maps of cells belonging to the same module share spacing and orientation, differing only in relative two-dimensional spatial phase, which could result from being part of a two-dimensional attractor guided by path integration. However, this architecture has the drawbacks of being complex to construct and rigid, since path integration allows for no deviations from the hexagonal pattern, such as those observed under a variety of experimental manipulations.
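The hexagonal periodicity mentioned above is commonly idealized as a sum of three plane waves whose wavevectors lie 60 degrees apart; this is a textbook construction for illustration, not the self-organizing model proposed in this article. The sketch below builds such a firing map and checks that translating the position by one lattice vector of the matching triangular lattice leaves the rate unchanged.

```python
import numpy as np

# Hedged illustration: idealized grid-cell firing map as three cosine plane
# waves 60 degrees apart (a standard textbook form, assumed here for
# illustration). Spacing and phase are arbitrary parameters.

spacing = 1.0                                    # grid spacing (arbitrary units)
k = 4 * np.pi / (np.sqrt(3) * spacing)           # wavenumber giving that spacing
angles = np.array([-np.pi/6, np.pi/6, np.pi/2])  # three directions, 60 deg apart
kvecs = k * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def rate(x, phase=np.zeros(2)):
    """Idealized grid firing rate at 2D position x with spatial phase offset."""
    return np.sum(np.cos(kvecs @ (x - phase)))

# Hexagonal periodicity: a lattice vector of the triangular lattice with the
# chosen spacing translates the map onto itself.
a1 = spacing * np.array([1.0, 0.0])
x = np.array([0.3, -0.7])
print(rate(x), rate(x + a1))                     # the two rates coincide
```

The peak rate occurs wherever the position equals the cell's spatial phase, so within a module, cells sharing spacing and orientation are distinguished purely by the `phase` offset, as described in the abstract.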