A reinforcement learning diffusion decision model for value-based decisions.

Psychon Bull Rev

Faculty of Psychology, University of Basel, Missionsstrasse 62a, 4055, Basel, Switzerland.

Published: August 2019

Article Abstract

Psychological models of value-based decision-making describe how subjective values are formed and mapped to single choices. Recently, additional efforts have been made to describe the temporal dynamics of these processes by adopting sequential sampling models from the perceptual decision-making tradition, such as the diffusion decision model (DDM). These models, when applied to value-based decision-making, allow mapping of subjective values not only to choices but also to response times. However, very few attempts have been made to adapt these models to situations in which decisions are followed by rewards, thereby producing learning effects. In this study, we propose a new combined reinforcement learning diffusion decision model (RLDDM) and test it on a learning task in which pairs of options differ with respect to both value difference and overall value. We found that participants became more accurate and faster with learning, responded faster and more accurately when options had more dissimilar values, and decided faster when confronted with more attractive (i.e., overall more valuable) pairs of options. We demonstrate that the suggested RLDDM can accommodate these effects and does so better than previously proposed models. To gain a better understanding of the model dynamics, we also compare it to standard DDMs and reinforcement learning models. Our work is a step forward towards bridging the gap between two traditions of decision-making research.
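To give a concrete sense of how the two model classes are coupled, below is a minimal, illustrative Python sketch, not the authors' implementation: a delta-rule learner updates option values from trial feedback, and on each trial the drift rate of a simulated diffusion process is scaled by the current learned value difference. The names `simulate_ddm_trial`, `rlddm_simulation`, `learning_rate`, and `drift_scaling` are hypothetical.

```python
import numpy as np

def simulate_ddm_trial(drift, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0, rng=None):
    """Euler simulation of one diffusion trial with symmetric boundaries at +/- boundary/2.
    Returns (choice, response_time); choice 1 = upper boundary, 0 = lower boundary."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary / 2:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(evidence > 0), ndt + t

def rlddm_simulation(reward_pairs, learning_rate=0.1, drift_scaling=2.0, rng=None):
    """Couple a delta-rule learner to the DDM: the drift rate on each trial is
    proportional to the learned value difference between the two options."""
    rng = rng or np.random.default_rng()
    q = np.zeros(2)                        # learned option values
    data = []
    for reward_pair in reward_pairs:       # reward_pair[i] = feedback if option i is chosen
        drift = drift_scaling * (q[1] - q[0])
        choice, rt = simulate_ddm_trial(drift, rng=rng)
        r = reward_pair[choice]
        q[choice] += learning_rate * (r - q[choice])   # reward prediction error update
        data.append((choice, rt, r))
    return data
```

For example, `rlddm_simulation([(1, 0)] * 100)` simulates 100 trials in which option 0 always pays 1 and option 1 pays 0; as the learned value difference grows, the drift rate moves away from zero, so simulated choices become more accurate and response times shorter, qualitatively matching the learning effects described in the abstract.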

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6820465
DOI: http://dx.doi.org/10.3758/s13423-018-1554-2

Publication Analysis

Top Keywords

reinforcement learning (12); diffusion decision (12); decision model (12); learning diffusion (8); value-based decision-making (8); subjective values (8); pairs options (8); models (6); learning (5); model (4)

Similar Publications

Distributed representations of temporally accumulated reward prediction errors in the mouse cortex.

Sci Adv

January 2025

Lee Kong Chian School of Medicine, Nanyang Technological University, 11 Mandalay Road, Singapore 308232, Singapore.

Reward prediction errors (RPEs) quantify the difference between expected and actual rewards, serving to refine future actions. Although reinforcement learning (RL) provides ample theoretical evidence suggesting that the long-term accumulation of these error signals improves learning efficiency, it remains unclear whether the brain uses similar mechanisms. To explore this, we constructed RL-based theoretical models and used multiregional two-photon calcium imaging in the mouse dorsal cortex.
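As an illustration of the error signal described above, here is a short hedged Python sketch (our own construction, not the authors' model) that computes the classic delta-rule reward prediction error and its running accumulation across trials; the function name and parameters are hypothetical.

```python
def delta_rule_rpes(rewards, learning_rate=0.1, initial_value=0.0):
    """Compute trial-by-trial reward prediction errors (RPEs) and their
    running sum under a simple delta-rule value estimate."""
    value, accumulated = initial_value, 0.0
    rpes = []
    for r in rewards:
        rpe = r - value                  # RPE: actual minus expected reward
        value += learning_rate * rpe     # update the expectation toward the outcome
        accumulated += rpe               # temporally accumulated error signal
        rpes.append((rpe, accumulated))
    return rpes
```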


Treatment efficacy for patients with obsessive-compulsive disorder (OCD) with poor insight is low. Insight refers to a patient's ability to recognize that their obsessions are irrational and that their compulsions are futile attempts to reduce anxiety. This case study presents the first application of virtual reality-assisted avatar therapy for OCD (VRT-OCD) in a patient with contamination OCD and ambivalent insight.


Recent evidence highlights that monetary rewards can increase the precision at which healthy human volunteers can detect small changes in the intensity of thermal noxious stimuli, contradicting the idea that rewards exert a broad inhibiting influence on pain perception. This effect was stronger with contingent rewards compared with noncontingent rewards, suggesting a successful learning process. In the present study, we implemented a model comparison approach that aimed to improve our understanding of the mechanisms that underlie thermal noxious discrimination in humans.


Transitive inference, the ability to establish hierarchical relationships between stimuli, is typically tested by training with premise pairs (e.g., A + B-, B + C-, C + D-, D + E-), which establishes a stimulus hierarchy (A > B > C > D > E).
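To make the premise-pair logic concrete, the following small Python snippet (our own illustration, not taken from the article) derives the implied stimulus order by chaining the "rewarded over" relation across premise pairs; it assumes the pairs form a simple chain, as in the example above.

```python
def infer_hierarchy(premise_pairs):
    """Chain premise pairs (winner, loser) into a single ordered hierarchy."""
    winners = {w for w, _ in premise_pairs}
    losers = {l for _, l in premise_pairs}
    order = [next(iter(winners - losers))]   # the top item never appears as a loser
    follows = dict(premise_pairs)            # maps each winner to the item it beats
    while order[-1] in follows:
        order.append(follows[order[-1]])
    return order

# Premise pairs A+B-, B+C-, C+D-, D+E- imply the hierarchy A > B > C > D > E
print(infer_hierarchy([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]))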


Recent research has highlighted a notable confidence bias in the haptic sense, yet its impact on learning relative to other senses remains unexplored. This online study investigated learning behaviour across visual, auditory, and haptic modalities using a probabilistic selection task on computers and mobile devices, employing dynamic and ecologically valid stimuli to enhance generalisability. We analysed reaction time as an indicator of confidence, alongside learning speed and task accuracy.

