Publications by authors named "Anne Collins"

Computational modeling has revealed that human research participants use both rapid working memory (WM) and incremental reinforcement learning (RL), a combination termed RL+WM, to solve a simple instrumental learning task, relying on WM when the number of stimuli is small and supplementing with RL when the number of stimuli exceeds WM capacity. Inspired by this work, we examined which learning systems and strategies are used by adolescent and adult mice when they first acquire a conditional associative learning task. In a version of the human RL+WM task translated for rodents, mice were required to associate odor stimuli (from a set of 2 or 4 odors) with a left or right port to receive reward.
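
A minimal sketch of how such a mixture can be formalized, assuming a standard delta-rule RL module and a capacity-limited WM module (all names and parameter values here are illustrative, not the authors' implementation):

```python
import numpy as np

def softmax(q, beta=8.0):
    """Convert values to choice probabilities; beta is the inverse temperature."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

class RLWM:
    """Minimal RL+WM mixture: WM stores the most recent stimulus-action outcome
    perfectly but is capacity-limited; RL learns incrementally. The policy mixes
    the two, weighting WM by capacity K relative to the stimulus set size."""

    def __init__(self, n_stimuli, n_actions, alpha=0.1, capacity=3, rho=0.9):
        self.q_rl = np.ones((n_stimuli, n_actions)) / n_actions  # incremental values
        self.q_wm = np.ones((n_stimuli, n_actions)) / n_actions  # one-shot WM store
        self.alpha = alpha
        # WM contributes more when the stimulus set fits within capacity.
        self.w_wm = rho * min(1.0, capacity / n_stimuli)

    def choose(self, stim, rng):
        p = (self.w_wm * softmax(self.q_wm[stim])
             + (1 - self.w_wm) * softmax(self.q_rl[stim]))
        return rng.choice(len(p), p=p)

    def update(self, stim, action, reward):
        self.q_rl[stim, action] += self.alpha * (reward - self.q_rl[stim, action])
        self.q_wm[stim, action] = reward  # WM: perfect one-trial retention

rng = np.random.default_rng(0)
agent = RLWM(n_stimuli=4, n_actions=2)
a = agent.choose(stim=0, rng=rng)
agent.update(stim=0, action=a, reward=1.0)
```

Published RLWM variants typically also let WM decay toward uniform between trials; that detail is omitted here for brevity.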


Learning structures that effectively abstract decision policies is key to the flexibility of human intelligence. Previous work has shown that humans use hierarchically structured policies to efficiently navigate complex and dynamic environments. However, the computational processes that support the learning and construction of such policies remain insufficiently understood.


Motor learning is often viewed as a unitary process that operates outside of conscious awareness. This perspective has led to the development of sophisticated models designed to elucidate the mechanisms of implicit sensorimotor learning. In this review, we argue for a broader perspective, emphasizing the contribution of explicit strategies to sensorimotor learning tasks.


Importance: Observational data have shown that postdiagnosis exercise is associated with reduced risk of prostate cancer death. The feasibility and tumor biological activity of exercise therapy are not known.

Objective: To identify the recommended phase 2 dose of exercise therapy for patients with prostate cancer.


Computational cognitive models have been used extensively to formalize cognitive processes. Model parameters offer a simple way to quantify individual differences in how humans process information. Similarly, model comparison allows researchers to identify which theories, embedded in different models, provide the best accounts of the data.
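
For concreteness, one standard way such model comparison is quantified (a textbook definition, not one taken from this article) is via information criteria that trade goodness of fit against model complexity:

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k \ln n - 2\ln\hat{L}
```

where k is the number of free parameters, \hat{L} the maximized likelihood, and n the number of observations; lower values indicate a model that accounts for the data better after penalizing complexity.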


Computational cognitive modeling is an important tool for understanding the processes supporting human and animal decision-making. Choice data in decision-making tasks are inherently noisy, and separating noise from signal can improve the quality of computational modeling. Common approaches to modeling decision noise often assume constant levels of noise or exploration throughout learning.
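
As one concrete illustration of that constant-noise assumption (illustrative code, not the authors' model), a softmax choice rule with a fixed inverse temperature applies the same exploration level on every trial; letting the temperature follow a schedule is one simple way to relax it:

```python
import numpy as np

def choice_probs(q_values, beta):
    """Softmax policy: beta is the inverse temperature (higher = less noisy)."""
    e = np.exp(beta * (q_values - np.max(q_values)))
    return e / e.sum()

q = np.array([0.2, 0.8])

# Constant-noise assumption: a single beta for the whole session.
print(choice_probs(q, beta=5.0))

# One simple relaxation: let exploration decay (beta grow) across trials.
def beta_schedule(trial, beta0=1.0, growth=0.05):
    return beta0 + growth * trial

print(choice_probs(q, beta=beta_schedule(0)))    # noisy early choices
print(choice_probs(q, beta=beta_schedule(200)))  # near-deterministic late
```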


Dopamine release in the nucleus accumbens has been hypothesized to signal reward prediction error, the difference between observed and predicted reward, suggesting a biological implementation for reinforcement learning. Rigorous tests of this hypothesis require assumptions about how the brain maps sensory signals to reward predictions, yet this mapping is still poorly understood. In particular, the mapping is non-trivial when sensory signals provide ambiguous information about the hidden state of the environment.
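
In standard notation (generic, not specific to this article), that hypothesis is usually written as a delta rule:

```latex
\delta_t = r_t - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t
```

where r_t is the observed reward, V(s_t) the reward predicted in state s_t, and \alpha a learning rate; phasic dopamine is hypothesized to track \delta_t. The mapping problem raised above is that when sensory input is ambiguous, s_t is a hidden state that must itself be inferred before \delta_t can be computed.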


Goals play a central role in human cognition. However, computational theories of learning and decision-making often take goals as given. Here, we review key empirical findings showing that goals shape the representations of inputs, responses, and outcomes, such that setting a goal crucially influences the central aspects of any learning process: states, actions, and rewards.


How does the similarity between stimuli affect our ability to learn appropriate response associations for them? In typical laboratory experiments, learning is investigated under somewhat idealized circumstances, where stimuli are easily discriminable. This is not representative of most real-life learning, where overlapping "stimuli" can result in different "rewards" and may be learned simultaneously.


When observing the outcome of a choice, people are sensitive to the choice's context, such that the experienced value of an option depends on the alternatives: getting $1 when the possibilities were 0 or 1 feels much better than when the possibilities were 1 or 10. Context-sensitive valuation has been documented within reinforcement learning (RL) tasks, in which values are learned from experience through trial and error. Range adaptation, wherein options are rescaled according to the range of values yielded by available options, has been proposed to account for this phenomenon.
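
Range adaptation is commonly formalized as rescaling each outcome by the range of values available in its context (a standard form, not necessarily the exact variant tested in this article):

```latex
\tilde{r} = \frac{r - r_{\min}}{r_{\max} - r_{\min}}
```

Under this rule, the $1 outcome above is encoded as 1 in the {0, 1} context but as 0 in the {1, 10} context, matching the intuition that the same outcome can feel very different depending on the alternatives.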


The ability to use past experience to effectively guide decision-making declines in older adulthood. Such declines have been theorized to emerge from either impairments of striatal reinforcement learning (RL) systems or impairments of recurrent networks in prefrontal and parietal cortex that support working memory (WM). Distinguishing between these hypotheses has been challenging because either RL or WM could be used to facilitate successful decision-making in typical laboratory tasks.


Using Bayesian methods to apply computational models of cognitive processes, or Bayesian cognitive modeling, is an important new trend in psychological research. The rise of Bayesian cognitive modeling has been accelerated by the introduction of software that efficiently automates the Markov chain Monte Carlo sampling used for Bayesian model fitting, including the popular Stan and PyMC packages, which automate the dynamic Hamiltonian Monte Carlo and No-U-Turn Sampler (HMC/NUTS) algorithms that we spotlight here. Unfortunately, Bayesian cognitive models can struggle to pass the growing number of diagnostic checks required of Bayesian models.
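
As a minimal, generic illustration of the HMC/NUTS workflow described here (a toy accuracy model in PyMC, not one of the article's cognitive models), fitting and then running the usual diagnostic checks looks like this:

```python
import numpy as np
import pymc as pm
import arviz as az

# Toy data: per-subject correct responses out of 100 trials.
k = np.array([62, 71, 55, 80, 67])
n = 100

with pm.Model() as model:
    # Hierarchical accuracy: subject-level rates drawn from a group-level Beta.
    mu = pm.Beta("mu", 2, 2)
    kappa = pm.HalfNormal("kappa", 10)
    theta = pm.Beta("theta", mu * kappa, (1 - mu) * kappa, shape=len(k))
    pm.Binomial("obs", n=n, p=theta, observed=k)

    # NUTS (dynamic HMC) sampling.
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# Diagnostic checks of the kind the article refers to:
print(az.summary(idata, var_names=["mu", "kappa"]))  # r_hat ~ 1, adequate ESS
print("divergences:", int(idata.sample_stats["diverging"].sum()))
```

Divergence counts near zero, r_hat values close to 1, and adequate effective sample sizes are among the checks that Bayesian cognitive models can struggle to pass.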

Article Synopsis
  • Human learning involves both reinforcement learning (RL) and working memory (WM) systems that interact in complex ways, presenting a trade-off where high WM load can slow down learning but improve retention of information.
  • Studies conducted with EEG showed that while a higher WM load slowed down the ability to learn, it ultimately led to stronger reinforcement signals that enhanced future retention of learned behaviors.
  • Induced stress was found to have a limited effect on the ability to switch between focusing on immediate learning and long-term retention, highlighting the intricate relationship between WM and RL systems in effective learning processes.

Humans have the exceptional ability to efficiently structure past knowledge during learning to enable fast generalization. Xia and Collins (2021) evaluated this ability in a hierarchically structured, sequential decision-making task, where participants could build "options" (strategy "chunks") at multiple levels of temporal and state abstraction. A quantitative model, the Option Model, captured the transfer effects observed in human participants, suggesting that humans create and compose hierarchical options and use them to explore novel contexts.
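
For readers unfamiliar with the term, "options" here builds on the generic hierarchical reinforcement learning formalism of Sutton, Precup, and Singh (1999), in which an option couples an initiation set, an internal policy, and a termination condition. A minimal data-structure sketch of that generic formalism (not the Option Model itself):

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    """A temporally extended action: usable wherever its initiation set allows,
    it runs its own policy until the termination condition fires."""
    initiation: Set[int]                  # states where the option may start
    policy: Callable[[int], int]          # state -> primitive action
    termination: Callable[[int], float]   # state -> P(option ends here)

# Hierarchical composition: a higher-level policy selects among options
# just as a flat policy selects among primitive actions.
```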


In reinforcement learning (RL) experiments, participants learn to make rewarding choices in response to different stimuli; RL models use outcomes to estimate stimulus-response values that change incrementally. RL models consider any response type indiscriminately, ranging from concretely defined motor choices (pressing a key with the index finger) to more general choices that can be executed in a number of ways (selecting dinner at a restaurant). However, does the learning process vary as a function of the choice type? In Experiment 1, we show that it does: Participants were slower and less accurate in learning correct choices of a general format compared with learning more concrete motor actions.


Reinforcement Learning (RL) models have revolutionized the cognitive and brain sciences, promising to explain behavior from simple conditioning to complex problem solving, to shed light on developmental and individual differences, and to anchor cognitive processes in specific brain mechanisms. However, the RL literature increasingly reveals contradictory results, which might cast doubt on these claims. We hypothesized that many contradictions arise from two commonly held assumptions about computational model parameters that are often invalid: that parameters generalize between contexts.


The dorsomedial striatum (DMS) plays a key role in action selection, but less is known about how direct and indirect pathway spiny projection neurons (dSPNs and iSPNs, respectively) contribute to choice rejection in freely moving animals. Here, we use pathway-specific chemogenetic manipulation during a serial choice foraging task to test the role of dSPNs and iSPNs in learned choice rejection. We find that chemogenetic activation, but not inhibition, of iSPNs disrupts rejection of nonrewarded choices, contrary to predictions of a simple "select/suppress" heuristic.


During adolescence, youth venture out, explore the wider world, and are challenged to learn how to navigate novel and uncertain environments. We investigated how performance changes across adolescent development in a stochastic, volatile reversal-learning task that uniquely taxes the balance of persistence and flexibility. In a sample of 291 participants aged 8-30, we found that in the mid-teen years, adolescents outperformed both younger and older participants.

Article Synopsis
  • Impulsivity is a tendency to make hasty decisions without careful thought, which affects decision-making, particularly in reward-based contexts.
  • Research aimed to understand how impulsivity impacts performance in a reward-driven learning task but did not find the expected results; instead, it showed nuanced effects on switching behavior after losses.
  • The study suggests that impulsivity's impact on learning may involve more intricate strategies than current computational models can explain, highlighting a need for further investigation in this area.

We encounter the world as a continuous flow and effortlessly segment sequences of events into episodes. This process of event segmentation engages working memory (WM) for tracking the flow of events and impacts subsequent memory accuracy. WM is limited in how much information it can hold.


Reinforcement learning (RL) models have advanced our understanding of how animals learn and make decisions, and how the brain supports some aspects of learning. However, the neural computations that are explained by RL algorithms fall short of explaining many sophisticated aspects of human decision making, including the generalization of learned information, one-shot learning, and the synthesis of task information in complex environments.


Reinforcement learning (RL) is a concept that has been invaluable to fields including machine learning, neuroscience, and cognitive science. However, what RL entails differs between fields, leading to difficulties when interpreting and translating findings. After laying out these differences, this paper focuses on cognitive (neuro)science to discuss how we as a field might over-interpret RL modeling results.


Humans have the astonishing capacity to quickly adapt to varying environmental demands and reach complex goals in the absence of extrinsic rewards. Part of what underlies this capacity is the ability to flexibly reuse and recombine previous experiences, and to plan future courses of action in a psychological space that is shaped by these experiences. Decades of research have suggested that humans use hierarchical representations for efficient planning and flexibility, but the origin of these representations has remained elusive.

Article Synopsis
  • Reinforcement learning and working memory are interconnected processes in human cognition, with significant overlap in brain networks, challenging the idea that they are completely distinct.
  • Recent studies show that examining one process can enhance understanding of the other, suggesting that both cognitive and computational sciences can benefit from this integrated approach.
  • Future research should focus on the relationship between these processes, as this understanding is crucial for developing artificial agents that learn more efficiently like humans and for comprehensively studying individual differences in behavior.