Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an expected and an obtained reward. There is evidence that the brain computes RPEs, but an outstanding question is whether positive RPEs ("better than expected") and negative RPEs ("worse than expected") are represented in a single integrated system. An electrophysiological component, the feedback-related negativity, has been claimed to encode an RPE, but its relative sensitivity to the utility of positive and negative RPEs remains unclear. This study explored the question by varying the utility of positive and negative RPEs in a design that controlled for other closely related properties of feedback and could distinguish utility from salience. It revealed a mediofrontal sensitivity to utility, for positive RPEs at 275–310 ms and for negative RPEs at 310–390 ms. These effects were preceded and succeeded by a response consistent with an unsigned prediction error, or "salience", coding.
DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2014.06.004
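The signed/unsigned distinction drawn in this abstract can be written down in a few lines. A minimal sketch (variable names are illustrative, not taken from the paper): the signed RPE is the obtained minus the expected reward, and the "salience" signal is its absolute value.

```python
# Minimal sketch of the signed vs. unsigned (salience) distinction.
# Names (expected, obtained) are illustrative, not taken from the paper.

def signed_rpe(expected: float, obtained: float) -> float:
    """Signed reward prediction error: positive = better than expected."""
    return obtained - expected

def salience(expected: float, obtained: float) -> float:
    """Unsigned prediction error: magnitude of surprise, sign discarded."""
    return abs(obtained - expected)

print(signed_rpe(0.5, 1.0))  #  0.5 -> positive RPE ("better than expected")
print(signed_rpe(0.5, 0.0))  # -0.5 -> negative RPE ("worse than expected")
print(salience(0.5, 0.0))    #  0.5 -> same salience as the positive case
```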
Cureus
October 2024
Department of Preventive Dentistry, College of Medicine and Dentistry, Riyadh Elm University (REU), Riyadh, SAU.
Background: Orthodontic expansion using a rapid palatal expander (RPE), initiated early in life, is one approach to treating malocclusions. However, prolonged RPE use leads to negative consequences. This study aims to determine orthodontists' perceptions of and experience with prolonged RPE use and its management.
bioRxiv
May 2024
Nash Family Department of Neuroscience and the Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
Dopamine (DA) signals originating from substantia nigra (SN) neurons are centrally involved in the regulation of motor and reward processing. DA signals behaviorally relevant events where reward outcomes differ from expectations (reward prediction errors, RPEs). RPEs play a crucial role in learning optimal courses of action and in determining response vigor when an agent expects rewards.
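The role this abstract assigns to RPEs in learning can be illustrated with a standard delta-rule update, in which each RPE nudges the reward expectation toward the outcome. A textbook sketch, not the model used in the study (the learning rate and reward probability are arbitrary):

```python
import random

# Delta-rule (Rescorla-Wagner) learning driven by RPEs: a textbook sketch,
# not the study's model. Parameter values are arbitrary.
ALPHA = 0.1      # learning rate
P_REWARD = 0.8   # true probability that the action pays off

value = 0.0      # current reward expectation
for trial in range(200):
    reward = 1.0 if random.random() < P_REWARD else 0.0
    rpe = reward - value      # signed prediction error
    value += ALPHA * rpe      # expectation moves toward the outcome

print(f"learned value ~ {value:.2f} (true expected reward = {P_REWARD})")
```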
Neuron
March 2024
Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA. Electronic address:
Midbrain dopamine neurons are thought to signal reward prediction errors (RPEs), but the mechanisms underlying RPE computation, particularly the contributions of different neurotransmitters, remain poorly understood. Here, we used a genetically encoded glutamate sensor to examine the pattern of glutamate inputs to dopamine neurons in mice. We found that glutamate inputs exhibit virtually all of the characteristics of RPE rather than conveying a specific component of RPE computation, such as reward or expectation.
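A full RPE signal can be distinguished from a component signal (reward-only or expectation-only) by how it loads on each term. The following toy simulation, which is not the paper's analysis, shows the idea: a signal encoding RPE = reward − expectation regresses onto reward with a coefficient near +1 and onto expectation with a coefficient near −1, while a reward-only signal loads on reward alone.

```python
import numpy as np

# Toy illustration (not the paper's analysis): regress simulated signals
# on reward and expectation to separate full-RPE coding from component coding.
rng = np.random.default_rng(0)
n = 1000
expectation = rng.uniform(0, 1, n)       # trial-by-trial expectation
reward = rng.binomial(1, expectation)    # outcomes roughly track expectation

rpe_signal = reward - expectation + 0.1 * rng.standard_normal(n)
reward_only = reward + 0.1 * rng.standard_normal(n)

X = np.column_stack([reward, expectation, np.ones(n)])
for name, y in [("RPE-like", rpe_signal), ("reward-only", reward_only)]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"{name}: reward coef = {beta[0]:+.2f}, expectation coef = {beta[1]:+.2f}")
```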
Nat Commun
December 2023
Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA.
The signed value and unsigned salience of reward prediction errors (RPEs) are critical to understanding reinforcement learning (RL) and cognitive control. Dorsomedial prefrontal cortex (dMPFC) and insula (INS) are key regions for integrating reward and surprise information, but conflicting evidence for both signed and unsigned activity has led to multiple proposals for the nature of RPE representations in these brain areas. Recently developed RL models allow neurons to respond differently to positive and negative RPEs.
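The final sentence refers to models in which a unit's update depends on the sign of the RPE. One common formalization uses separate learning rates for positive and negative RPEs, as in risk-sensitive and distributional RL. A minimal sketch (parameter values are illustrative, not from the paper):

```python
# Sketch of an RL update with separate learning rates for positive and
# negative RPEs, one common way to let units respond asymmetrically
# (e.g., risk-sensitive / distributional RL). Values are illustrative.

def asymmetric_update(value: float, reward: float,
                      alpha_pos: float = 0.2, alpha_neg: float = 0.05) -> float:
    """Return the updated value estimate; the effective learning rate
    depends on the sign of the prediction error."""
    rpe = reward - value
    alpha = alpha_pos if rpe > 0 else alpha_neg
    return value + alpha * rpe

v = 0.5
v = asymmetric_update(v, reward=1.0)  # positive RPE: larger step up
print(v)  # 0.6
v = asymmetric_update(v, reward=0.0)  # negative RPE: smaller step down
print(v)  # 0.57
```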
Both the midbrain systems, encompassing the ventral striatum (VS), and the cortical systems, including the dorsal anterior cingulate cortex (dACC), play roles in reinforcing and enhancing learning. However, the specific contributions of signals from these regions to learning remain unclear. To investigate this, we examined how VS and dACC are involved in visual perceptual learning (VPL) through an orientation discrimination task.
View Article and Find Full Text PDFEnter search terms and have AI summaries delivered each week - change queries or unsubscribe any time!