Spatial navigation has received much attention from neuroscientists, leading to the identification of key brain areas and the discovery of numerous spatially selective cells. Despite this progress, our understanding of how the pieces fit together to drive behavior is generally lacking. We argue that this is partly caused by insufficient communication between behavioral and neuroscientific researchers.
Reinforcement learning (RL) has become a popular paradigm for modeling animal behavior, analyzing neuronal representations, and studying their emergence during learning. This development has been fueled by advances in understanding the role of RL in both the brain and artificial intelligence. However, while in machine learning a set of tools and standardized benchmarks facilitate the development of new methods and their comparison to existing ones, in neuroscience, the software infrastructure is much more fragmented.
In general, strategies for spatial navigation could employ one of two spatial reference frames: egocentric or allocentric. Notwithstanding intuitive explanations, it remains unclear, however, under which circumstances one strategy is chosen over the other, and how neural representations relate to the chosen strategy. Here, we first use a deep reinforcement learning model to investigate whether a particular type of navigation strategy arises spontaneously during spatial learning without imposing a bias onto the model.
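The distinction between the two reference frames can be made concrete with a coordinate transform: an allocentric representation locates a target in fixed world coordinates, while an egocentric one expresses it relative to the agent's own position and heading. The sketch below (an illustration, not code from the study; the function name and conventions are our own) converts one into the other.

```python
import math

def allocentric_to_egocentric(agent_xy, heading, target_xy):
    """Rotate a world-frame (allocentric) target position into the
    agent's body-centered (egocentric) frame.

    agent_xy  : (x, y) agent position in world coordinates
    heading   : agent heading in radians, 0 = facing +x
    target_xy : (x, y) target position in world coordinates
    Returns (forward, leftward) offsets relative to the agent.
    """
    dx = target_xy[0] - agent_xy[0]
    dy = target_xy[1] - agent_xy[1]
    # Rotate the displacement by -heading so the agent's facing
    # direction becomes the +x (forward) axis.
    forward = dx * math.cos(heading) + dy * math.sin(heading)
    leftward = -dx * math.sin(heading) + dy * math.cos(heading)
    return forward, leftward

# Agent at the origin facing +y (heading = pi/2); a target two units
# "north" of it lies straight ahead in egocentric terms.
f, l = allocentric_to_egocentric((0.0, 0.0), math.pi / 2, (0.0, 2.0))
print(round(f, 6), round(l, 6))  # 2.0 0.0
```

An allocentric navigation strategy would learn a policy over the world-frame coordinates directly, whereas an egocentric strategy would operate on the output of a transform like this one.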
The context-dependence of extinction learning has been well studied and requires the hippocampus. However, the underlying neural mechanisms are still poorly understood. Using memory-driven reinforcement learning and deep neural networks, we developed a model that learns to navigate autonomously in biologically realistic virtual reality environments based on raw camera inputs alone.