Polarization raises concerns for democracy and society, and these concerns have grown in the internet era, where (mis)information has become ubiquitous, its transmission faster than ever, and the freedom and means of expressing opinions keep expanding. The origin of polarization, however, remains unclear: multiple social and emotional factors, as well as individual reasoning biases, could explain its current forms. In the present work, we adopt a principled approach and show that polarization tendencies can take root in a biased reward processing of new information that favours choice-confirmatory evidence.
Reinforcement learning involves updating estimates of the value of states and actions on the basis of experience. Previous work has shown that in humans, reinforcement learning exhibits a confirmatory bias: when the value of a chosen option is being updated, estimates are revised more radically following positive than negative reward prediction errors, but the converse is observed when updating the unchosen option value estimate. Here, we simulate performance on a multi-armed bandit task to examine the consequences of a confirmatory bias for reward harvesting.
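The asymmetric update described above can be sketched as a simple simulation; the learning rates, epsilon-greedy choice rule, and reward probabilities below are illustrative assumptions, not the fitted parameters or task design of the original study.

```python
import random

def simulate_confirmatory_bandit(alpha_conf=0.3, alpha_disconf=0.1,
                                 p_reward=(0.7, 0.3), n_trials=1000, seed=0):
    """Two-armed bandit with a confirmatory-bias Rescorla-Wagner learner.

    The chosen option is updated with alpha_conf after positive prediction
    errors and alpha_disconf after negative ones; for the unchosen option
    the asymmetry reverses (hypothetical parameter values for illustration).
    Assumes forgone outcomes are shown, so both options are updated each trial.
    """
    rng = random.Random(seed)
    q = [0.5, 0.5]            # value estimates for the two arms
    total_reward = 0.0
    for _ in range(n_trials):
        choice = 0 if q[0] >= q[1] else 1
        if rng.random() < 0.1:          # occasional exploration
            choice = 1 - choice
        other = 1 - choice
        rewards = [1.0 if rng.random() < p else 0.0 for p in p_reward]
        # chosen option: larger step after positive prediction errors
        delta_c = rewards[choice] - q[choice]
        q[choice] += (alpha_conf if delta_c > 0 else alpha_disconf) * delta_c
        # unchosen option: larger step after negative prediction errors
        delta_u = rewards[other] - q[other]
        q[other] += (alpha_disconf if delta_u > 0 else alpha_conf) * delta_u
        total_reward += rewards[choice]
    return q, total_reward
```

Because both updates pull each estimate toward an observed 0/1 outcome with a learning rate below 1, the value estimates stay bounded in [0, 1] while the bias inflates the gap between the chosen and unchosen options.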
Money is a fundamental and ubiquitous institution in modern economies. However, the question of its emergence remains a central one for economists. The monetary search-theoretic approach studies the conditions under which commodity money emerges as a solution to override frictions inherent to interindividual exchanges in a decentralized economy.
Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning.
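A model of the kind described, with separate learning rates for positive and negative prediction errors in both the factual and counterfactual streams, might be sketched as follows; the parameterization and the example values are hypothetical, not the study's fitted estimates.

```python
def update_values(q_chosen, q_unchosen, r_obtained, r_forgone, alphas):
    """One trial of a four-learning-rate Rescorla-Wagner update.

    alphas maps (stream, valence) pairs to learning rates, e.g.
    ('factual', '+') for positive prediction errors on obtained outcomes
    and ('counterfactual', '-') for negative ones on forgone outcomes.
    """
    delta_f = r_obtained - q_chosen        # factual prediction error
    delta_cf = r_forgone - q_unchosen      # counterfactual prediction error
    q_chosen += alphas[('factual', '+' if delta_f > 0 else '-')] * delta_f
    q_unchosen += alphas[('counterfactual', '+' if delta_cf > 0 else '-')] * delta_cf
    return q_chosen, q_unchosen

# Illustrative parameter values (hypothetical, for demonstration only):
alphas = {('factual', '+'): 0.4, ('factual', '-'): 0.2,
          ('counterfactual', '+'): 0.1, ('counterfactual', '-'): 0.3}
qc, qu = update_values(0.5, 0.5, 1.0, 0.0, alphas)
# With these values: qc = 0.5 + 0.4 * 0.5 = 0.7, qu = 0.5 + 0.3 * (-0.5) = 0.35
```

Fitting the four rates independently and comparing them across streams is one way to test whether the valence asymmetry found in factual learning also holds, or reverses, for counterfactual learning.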