Publications by authors named "Ila Fiete"

Hippocampal circuits in the brain enable two distinct cognitive functions: the construction of spatial maps for navigation, and the storage of sequential episodic memories. Although there have been advances in modelling spatial representations in the hippocampus, we lack good models of its role in episodic memory. Here we present a neocortical-entorhinal-hippocampal network model that implements a high-capacity general associative memory, spatial memory and episodic memory.

It has been an open question in deep learning whether fault-tolerant computation is possible: can arbitrarily reliable computation be achieved using only unreliable neurons? In the grid cells of the mammalian cortex, analog error-correction codes have been observed to protect states against neural spiking noise, but their role in information processing is unclear. Here, we use these biological error-correction codes to develop a universal fault-tolerant neural network that achieves reliable computation when the faultiness of each neuron lies below a sharp threshold; remarkably, noisy biological neurons fall below this threshold. The discovery of a phase transition from faulty to fault-tolerant neural computation suggests a mechanism for reliable computation in the cortex and opens a path toward understanding noisy analog systems relevant to artificial intelligence and neuromorphic computing.

The relationship between neuroscience and artificial intelligence (AI) has evolved rapidly over the past decade. These two areas of study influence and stimulate each other. We invited experts to share their perspectives on this exciting intersection, focusing on current achievements, unsolved questions, and future directions.

Every day, hundreds of thousands of people undergo general anesthesia. One hypothesis is that anesthesia disrupts dynamic stability: the brain's ability to balance excitability against the need to remain stable and controllable. To test this hypothesis, we developed a method for quantifying changes in population-level dynamic stability in complex systems: delayed linear analysis for stability estimation (DeLASE).
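DeLASE itself is specified in the paper; as an illustration of the general idea only (a hypothetical toy, not the authors' implementation), one can fit a linear map to delay-embedded population activity and read stability off its eigenvalues:

```python
import numpy as np

def delay_embed(x, n_delays):
    # Stack n_delays time-shifted copies of the (T, N) signal x side by side.
    T = x.shape[0] - n_delays + 1
    return np.hstack([x[i:i + T] for i in range(n_delays)])

def stability_estimate(x, n_delays=5):
    # Fit H[t+1] ~ H[t] @ A by least squares on the delay-embedded data,
    # then return the spectral radius of A: below 1 means the fitted
    # linear dynamics are stable, above 1 means unstable.
    H = delay_embed(x, n_delays)
    A, *_ = np.linalg.lstsq(H[:-1], H[1:], rcond=None)
    return np.abs(np.linalg.eigvals(A)).max()

# A damped, noisy 2D oscillation should be judged stable (radius < 1).
rng = np.random.default_rng(0)
t = np.arange(500)
x = np.exp(-t / 100)[:, None] * np.c_[np.sin(t * 0.2), np.cos(t * 0.2)]
x = x + 0.001 * rng.standard_normal(x.shape)
print(stability_estimate(x))
```

The delay embedding lets a linear fit capture dynamics whose full state is not directly observed, which is the same motivation the method's name points to.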

A cognitive map is a suitably structured representation that enables novel computations using previous experience; for example, planning a new route in a familiar space. Work in mammals has found direct evidence for such representations in the presence of exogenous sensory inputs in both spatial and non-spatial domains. Here we tested a foundational postulate of the original cognitive map theory: that cognitive maps support endogenous computations without external input.

The activity patterns of grid cells form distinctively regular triangular lattices over the explored spatial environment and are largely invariant to visual stimuli, animal movement, and environment geometry. These neurons pose fascinating challenges to the curious (neuro)scientist: What circuit mechanisms create spatially periodic activity patterns from the monotonic input-output responses of single neurons? How and why does the brain encode a local, nonperiodic variable (the allocentric position of the animal) with a periodic, nonlocal code? And are grid cells truly specialized for spatial computation, or do they play a broader role in general cognition? We review efforts to uncover the mechanisms and functional properties of grid cells, highlighting recent progress in the experimental validation of mechanistic grid-cell models, and discuss the coding properties and functional advantages of the grid code as suggested by continuous attractor network models of grid cells.

Work on deep learning-based models of grid cells suggests that grid cells generically and robustly arise from optimizing networks to path integrate, i.e., track one's spatial position by integrating self-velocity signals.
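As a reminder of what path integration means mechanically, here is a toy sketch (hypothetical velocity trace and noise level) of tracking position by summing self-velocity estimates, and of how velocity noise makes the position estimate drift:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1
steps = 200
true_velocity = np.tile([1.0, 0.5], (steps, 1))            # constant 2D self-motion
noisy_velocity = true_velocity + 0.2 * rng.standard_normal((steps, 2))

# Path integration: position is the running sum of velocity estimates.
true_path = np.cumsum(true_velocity * dt, axis=0)
estimated_path = np.cumsum(noisy_velocity * dt, axis=0)

# Because noise is integrated along with the signal, position error
# accumulates over time rather than averaging away.
error = np.linalg.norm(estimated_path - true_path, axis=1)
print(true_path[-1], error[-1])
```

This is why, as the papers below note, small velocity-estimation errors compound into large navigation errors without corrective cues.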

The sensory cortex amplifies relevant features of external stimuli. This sensitivity and selectivity arise through the transformation of inputs by cortical circuitry. We characterize the circuit mechanisms and dynamics of cortical amplification by making large-scale simultaneous measurements of single cells in awake primates and testing computational models.

Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.

In this Review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, corrects errors and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous-attractor dynamics have been concretely and rigorously identified.
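As a concrete illustration of continuous-attractor dynamics (a textbook-style ring attractor with illustrative parameters, not a model from any specific paper reviewed here), simple rate units with local excitation and broad inhibition can hold a persistent activity bump after the input that created it is removed:

```python
import numpy as np

# N rate units on a circle, with cosine-shaped local excitation and
# uniform inhibition (illustrative parameters).
N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
diff = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
W = (8.0 * np.cos(diff) - 2.0) / N

r = np.zeros(N)
dt = 0.1
for t in range(2000):
    # A transient cue centered on unit 25 is removed after 200 steps.
    cue = 2.0 * np.exp(-diff[:, 25] ** 2 / 0.5) if t < 200 else 0.0
    r += dt * (-r + np.clip(W @ r + cue, 0.0, 1.0))

# A localized bump of activity persists at the cued location with no
# external input -- the persistent state used for working memory.
print(r[25], r[75])
```

The "forgetful" single units here have a time constant of one simulation unit, yet the collective bump persists indefinitely, which is the point of the Review's framing.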

The ability to associate temporally segregated information and assign positive or negative valence to environmental cues is paramount for survival. Studies have shown that different projections from the basolateral amygdala (BLA) are potentiated following reward or punishment learning. However, we do not yet understand how valence-specific information is routed to the BLA neurons with the appropriate downstream projections, nor do we understand how to reconcile the sub-second timescales of synaptic plasticity with the longer timescales separating the predictive cues from their outcomes.

Most social species self-organize into dominance hierarchies, which decreases aggression and conserves energy, but it is not clear how individuals know their social rank. We have only begun to learn how the brain represents social rank and guides behaviour on the basis of this representation. The medial prefrontal cortex (mPFC) is involved in social dominance in rodents and humans.

Ila Fiete. Curr Biol, December 2021.

Interview with Ila Fiete of the Massachusetts Institute of Technology, who studies the microscopic cellular and synaptic processes in the brain that give rise to memory and cognition.

What factors constrain the arrangement of the multiple fields of a place cell? By modeling place cells as perceptrons that act on multiscale periodic grid-cell inputs, we analytically enumerate a place cell's repertoire (how many field arrangements it can realize without external cues while its grid inputs are unique) and derive its capacity (the spatial range over which it can achieve any field arrangement). We show that the repertoire is very large and relatively noise-robust. However, the repertoire is a vanishing fraction of all arrangements, while capacity scales only as the sum of the grid periods, so field arrangements are constrained over larger distances.
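A toy version of the perceptron setup (illustrative periods, phases, and threshold, not the paper's parameters) shows how a thresholded sum of multiscale periodic inputs yields irregular-looking place fields that repeat only at the least common multiple of the grid periods:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 60, 3000)          # 1D track (arbitrary units)
periods = [3.0, 4.0, 5.0]             # hypothetical grid modules; lcm = 60

# Each module contributes several phase-shifted periodic (grid-like) inputs.
grid_inputs = np.concatenate([
    np.cos(2 * np.pi * (x[None, :] / p + np.arange(6)[:, None] / 6))
    for p in periods
])

# A place cell as a perceptron: thresholded weighted sum of grid inputs.
# The threshold is set so the cell fires over ~10% of the track.
w = rng.standard_normal(grid_inputs.shape[0])
drive = w @ grid_inputs
activity = np.maximum(drive - np.quantile(drive, 0.9), 0.0)

# Within one lcm of the periods the fields look irregular; beyond it,
# the whole arrangement necessarily repeats (activity at 0 equals at 60).
print(np.mean(activity > 0))
```

Changing the weights `w` changes which field arrangement is realized, which is exactly the repertoire being counted; the forced repetition at the periods' least common multiple is the capacity constraint.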

Whittington et al. demonstrate how network architectures defined in a spatial context may be useful for inference on different types of relational knowledge. These architectures allow for learning the structure of the environment and then transferring that knowledge to allow prediction of novel transitions.

An elemental computation in the brain is to identify the best in a set of options and report its value. It is required for inference, decision-making, optimization, action selection, consensus, and foraging. Neural computing is considered powerful because of its parallelism; however, it is unclear whether neurons can perform this max-finding operation in a way that improves upon the prohibitively slow optimal serial max-finding computation (which takes [Formula: see text] time for N noisy candidate options) by a factor of N, the benchmark for parallel computation.
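For contrast with the parallel neural schemes the paper considers, a naive serial strategy for noisy max-finding (hypothetical option values and Gaussian observation noise) simply averages repeated samples of each option before picking a winner:

```python
import numpy as np

rng = np.random.default_rng(3)
values = np.array([0.2, 0.5, 0.8, 0.3])   # hypothetical option values

def serial_noisy_max(values, n_samples):
    # Average repeated noisy observations of each option, then report the
    # best empirical mean -- the straightforward serial strategy whose
    # cost grows with both N and the samples needed per option.
    noisy = values[None, :] + rng.standard_normal((n_samples, len(values)))
    return int(np.argmax(noisy.mean(axis=0)))

# With enough samples per option, the true best (index 2) is found reliably.
hits = sum(serial_noisy_max(values, 500) == 2 for _ in range(20))
print(hits, "/ 20 correct")
```

The paper's question is whether neural parallelism can beat this kind of serial scan by a factor of N; the sketch only fixes intuition for the baseline being beaten.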

Large scientific projects in genomics and astronomy are influential not because they answer any single question but because they enable investigation of continuously arising new questions from the same data-rich sources. Advances in automated mapping of the brain's synaptic connections (connectomics) suggest that the complicated circuits underlying brain function are ripe for analysis. We discuss benefits of mapping a mouse brain at the level of synapses.

Understanding the mechanisms of neural computation and learning will require knowledge of the underlying circuitry. Because it is difficult to directly measure the wiring diagrams of neural circuits, there has long been an interest in estimating them algorithmically from multicell activity recordings. We show that even sophisticated methods, applied to unlimited data from every cell in the circuit, are biased toward inferring connections between unconnected but highly correlated neurons.
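The common-input confound behind this bias is easy to reproduce in a toy simulation (illustrative signals, not the paper's methods): two unconnected units driven by a shared source are strongly correlated, so a correlation-based inference procedure would link them:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 20000
shared = rng.standard_normal(T)          # common input to both neurons

# Two UNCONNECTED neurons driven by the same source, plus private noise.
b = shared + 0.5 * rng.standard_normal(T)
c = shared + 0.5 * rng.standard_normal(T)
# A third neuron that is genuinely independent of the others.
d = rng.standard_normal(T)

# A naive correlation-based "connectivity" estimate links b and c strongly,
# even though no synapse exists between them -- the bias described above.
print(np.corrcoef(b, c)[0, 1], np.corrcoef(b, d)[0, 1])
```

No amount of additional data fixes this: the b-c correlation converges to a large nonzero value, so estimators that rely on it remain biased even in the infinite-data limit.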

Path integration plays a vital role in navigation: it enables the continuous tracking of one's position in space by integrating self-motion cues. Path integration abilities vary widely across individuals, and tend to deteriorate in old age. The specific causes of path integration errors, however, remain poorly characterized.

We shed light on the potential of entorhinal grid cells to efficiently encode variables of dimension greater than two, while remaining faithful to empirical data on their low-dimensional structure. Our model constructs representations of high-dimensional inputs through a combination of low-dimensional random projections and "classical" low-dimensional hexagonal grid cell responses. Without reconfiguration of the recurrent circuit, the same system can flexibly encode multiple variables of different dimensions, maximizing the coding range per dimension by automatically trading off dimension against an exponentially large coding range.
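A schematic of this kind of construction (using 1D periodic responses as a stand-in for 2D hexagonal modules; all parameters are illustrative) combines fixed random projections with modular, periodic readouts:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 8                                   # dimension of the encoded variable
n_modules = 6
periods = np.linspace(1.0, 2.0, n_modules)

# Each module applies a fixed random projection to the high-D input and
# represents the result only modulo its own period -- a 1D stand-in for
# the 2D hexagonal grid-cell response.
projections = rng.standard_normal((n_modules, d))

def grid_code(x):
    # Phase (in [0, 1)) of the projected input within each module.
    return np.mod(projections @ x, periods) / periods

x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
print(grid_code(x1), grid_code(x2))
```

No module sees the full variable, yet the vector of phases across modules distinguishes inputs; the circuit itself (the projections and periods) is fixed regardless of the input dimension, matching the "without reconfiguration" claim.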

Neural circuits construct distributed representations of key variables (external stimuli, or internal constructs of quantities relevant for survival, such as an estimate of one's location in the world) as vectors of population activity. Although population activity vectors may have thousands of entries (dimensions), we consider that they trace out a low-dimensional manifold whose dimension and topology match those of the represented variable. This manifold perspective enables blind discovery and decoding of the represented variable using only neural population activity, without knowledge of the input, output, behavior, or topography.
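A minimal illustration of this perspective (synthetic cosine-tuned population, not data from the paper): dimensionality reduction applied to population activity alone reveals that a 50-dimensional activity trace occupies a two-dimensional subspace containing a ring, matching the circular latent variable:

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, T = 50, 1000
angles = rng.uniform(0, 2 * np.pi, T)            # latent circular variable
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

# Population activity: cosine-tuned responses plus noise. The 50-dim
# activity vectors actually trace out a ring embedded in two dimensions.
activity = np.cos(angles[:, None] - preferred[None, :])
activity += 0.05 * rng.standard_normal((T, n_neurons))

# PCA on activity alone (no behavioral labels): the first two components
# capture nearly all variance, exposing the low-dimensional manifold.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
variance = s**2 / np.sum(s**2)
print(np.round(variance[:3], 3))
```

Nothing about `angles` was used in the analysis, which is the sense in which discovery of the represented variable is "blind".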
