In reinforcement learning (RL) tasks, decision makers learn the values of actions in a context-dependent fashion. Although context dependence has many advantages, it can lead to suboptimal preferences when choice options are extrapolated beyond their original encoding contexts. Here, we tested whether we could manipulate context dependence in RL by introducing a secondary task designed to bias attention toward either absolute or relative outcomes. Participants completed a learning phase that involved choices between two (Experiment 1; N = 111) or three (Experiment 2; N = 90) options per trial with complete feedback. Choice options were grouped in stable contexts so that only a small set of the possible combinations were encountered. One group of participants rated how they felt about particular options (Feelings condition), and another group reported how much they expected to win from particular options (Outcomes condition) at occasional points throughout the learning phase. A third group (Control condition) made no ratings. In the subsequent transfer test, participants chose between all possible pairs of options without feedback. The experimental manipulation had no effect on learning phase performance but a significant effect on transfer, with the Feelings and Control conditions exhibiting greater context dependence than the Outcomes condition. Further, rated feelings reflected relative valuation whereas expected outcomes were more sensitive to absolute option values. Hierarchical Bayesian modeling was used to summarize the findings from both experiments. Our results suggest that attending to affective reactions versus expected outcomes moderates the effects of encoding context on subsequent choices. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
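To make the contrast between absolute and relative (context-dependent) valuation concrete, here is a minimal sketch of two delta-rule learners. This is not the paper's model (the abstract does not specify the hierarchical Bayesian models used); the option names, reward magnitudes, and learning rate below are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch (not the paper's model): two delta-rule learners that
# encode either ABSOLUTE outcomes or RELATIVE outcomes normalized to the
# reward range of the encoding context. All values here are assumptions.

ALPHA = 0.3  # learning rate (assumed)

def update_absolute(q, chosen, reward):
    """Standard delta rule on the raw (absolute) outcome."""
    q[chosen] += ALPHA * (reward - q[chosen])

def update_relative(q, chosen, reward, r_min, r_max):
    """Delta rule on the outcome normalized within the context's reward
    range, so the locally best option converges toward 1.0 everywhere."""
    norm = (reward - r_min) / (r_max - r_min)
    q[chosen] += ALPHA * (norm - q[chosen])

# Two stable contexts with different absolute stakes (hypothetical values).
contexts = {"low":  (["low_A", "low_B"],   [1.0, 2.0]),
            "high": (["high_A", "high_B"], [5.0, 10.0])}
q_abs = {opt: 0.0 for opts, _ in contexts.values() for opt in opts}
q_rel = dict(q_abs)

rng = np.random.default_rng(0)
for _ in range(200):
    options, rewards = contexts[rng.choice(list(contexts))]
    i = rng.integers(len(options))  # random sampling, for illustration only
    update_absolute(q_abs, options[i], rewards[i])
    update_relative(q_rel, options[i], rewards[i], min(rewards), max(rewards))

# The relative learner values low_B (locally best, ~1.0) like high_B, so a
# novel transfer pairing such as low_B vs. high_A yields the suboptimal,
# context-dependent preference the abstract describes; the absolute learner
# ranks options by raw payoff instead.
print({k: round(v, 2) for k, v in q_rel.items()})
print({k: round(v, 2) for k, v in q_abs.items()})
```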
DOI: http://dx.doi.org/10.1037/xlm0001145