How social reinforcement learning can lead to metastable polarisation and the voter model.

PLoS One

Department of Applied Mathematics, University of Twente, Enschede, The Netherlands.

Published: December 2024

Previous explanations for the persistence of polarization of opinions have typically relied on modelling assumptions that presuppose the possibility of polarization (i.e., assumptions allowing a pair of agents to drift apart in their opinions, such as repulsive interactions or bounded confidence). An exception is a recent simulation study showing that polarization persists when agents form their opinions using social reinforcement learning. Our goal is to highlight the usefulness of reinforcement learning in the context of modelling opinion dynamics, but also to show that caution is required when selecting the tools used to study such a model. We show that the polarization observed in the model of the simulation study cannot persist indefinitely, and that the model reaches consensus asymptotically with probability one. By constructing a link between the reinforcement learning model and the voter model, we argue that the observed polarization is metastable. Finally, we show that a slight modification of the agents' learning process changes the model from non-ergodic to ergodic. Our results show that reinforcement learning may be a powerful method for modelling polarization in opinion dynamics, but that the tools appropriate for analysing such models (such as the stationary distribution or the time to absorption) depend crucially on their properties (such as ergodicity or transience). These properties are determined by the details of the learning process and may be difficult to identify based solely on simulations.
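
As a point of reference for the model class discussed in the abstract, the sketch below contrasts a minimal social reinforcement learning opinion model with the classical voter model. It is an illustrative assumption, not the paper's model: the well-mixed population, the ±1 agreement reward, the greedy opinion choice, and all parameter values are chosen only for demonstration.

```python
# Minimal sketch (assumed setup, not the paper's model): agents reinforce
# opinions that earn social agreement, compared with the classical voter model.
import random

N = 50          # number of agents (assumed)
ALPHA = 0.1     # learning rate (assumed)
STEPS = 20000   # pairwise interaction steps (assumed)

def run_social_rl(seed=0):
    """Each agent i keeps a preference Q[i][o] for opinions o in {0, 1}.
    A random speaker voices its preferred opinion to a random listener and
    receives reward +1 on agreement, -1 on disagreement (assumed reward)."""
    rng = random.Random(seed)
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(N)]
    for _ in range(STEPS):
        i, j = rng.sample(range(N), 2)
        o_i = 0 if Q[i][0] >= Q[i][1] else 1       # speaker's greedy opinion
        o_j = 0 if Q[j][0] >= Q[j][1] else 1       # listener's current opinion
        reward = 1.0 if o_i == o_j else -1.0       # social reinforcement
        Q[i][o_i] += ALPHA * (reward - Q[i][o_i])  # error-driven update
    opinions = [0 if q[0] >= q[1] else 1 for q in Q]
    return sum(opinions) / N                       # fraction holding opinion 1

def run_voter_model(seed=0):
    """Classical voter model: a random agent copies a random peer's opinion.
    Consensus states are absorbing, but absorption can take a long time."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(STEPS):
        i, j = rng.sample(range(N), 2)
        opinions[i] = opinions[j]
    return sum(opinions) / N

if __name__ == "__main__":
    print("social RL, fraction holding opinion 1:", run_social_rl())
    print("voter model, fraction holding opinion 1:", run_voter_model())
```

Finite-horizon runs of either dynamic can show long-lived coexistence of both opinions, which is why the distinction between metastable polarization and true non-ergodicity is hard to read off from simulations alone.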

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11651571
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0313951

Publication Analysis

Top Keywords

reinforcement learning (20); social reinforcement (8); voter model (8); simulation study (8); opinion dynamics (8); learning process (8); learning (7); model (6); polarization (6); learning lead (4)

Similar Publications

Introduction: The COVID-19 (COronaVIrus Disease-2019) pandemic highlighted the importance of assessing the rationales behind vaccine hesitancy for the containment of pandemics. In this nationwide study, representative of the Luxembourgish population, we identified hesitant groups from adolescence to late adulthood and explored motivations both for and against vaccination.

Methods: We combined data collected via online surveys for the CON-VINCE (COvid-19 National survey for assessing VIral spread by Non-affected CarriErs) study (1865 respondents aged 18-84) and for the YAC (Young people And Covid-19) study (3740 respondents aged 12-29).

Active recall, the act of recalling knowledge from memory, and games-based learning, the use of games and game elements for learning, are well-established as effective strategies for learning gross anatomy. An activity that applies both principles is Catch-Phrase, a fast-paced word guessing game. In Anatomy Catch-Phrase, players must get their teammates to identify an anatomical term by describing its features, functions, or relationships without saying the term itself.

Background: Domestic violence and abuse (DVA) is a violation of human rights that damages the health and well-being of gay, bisexual and other men who have sex with men (gbMSM). Sexual health services provide a unique opportunity to assess for DVA and provide support. This study explores the feasibility and acceptability of Healthcare Responding to Men for Safety (HERMES), a pilot intervention aimed at improving the identification and referral of gbMSM experiencing DVA in a London NHS Trust.

Schemas, reinforcement learning and the medial prefrontal cortex.

Nat Rev Neurosci

January 2025

Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.

Schemas are rich and complex knowledge structures about the typical unfolding of events in a context; for example, a schema of a dinner at a restaurant. In this Perspective, we suggest that reinforcement learning (RL), a computational theory of learning the structure of the world and relevant goal-oriented behaviour, underlies schema learning. We synthesize the literature on schemas and RL to propose that three RL principles might govern the learning of schemas: learning via prediction errors, constructing hierarchical knowledge using hierarchical RL, and dimensionality reduction through learning a simplified and abstract representation of the world.
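
The "learning via prediction errors" principle mentioned above is commonly formalised as an error-driven value update. The toy sketch below is not taken from the cited Perspective; it runs a standard temporal-difference update over an assumed restaurant-visit sequence so that earlier steps of the schema come to predict the eventual reward.

```python
# Illustrative temporal-difference (prediction-error) update on an assumed
# toy "restaurant schema"; states, rewards, and parameters are made up here.
ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor (assumed)
values = {"menu": 0.0, "order": 0.0, "eat": 0.0, "pay": 0.0}
transitions = {"menu": "order", "order": "eat", "eat": "pay", "pay": None}
rewards = {"menu": 0.0, "order": 0.0, "eat": 1.0, "pay": 0.0}

for _ in range(500):                      # repeated episodes
    state = "menu"
    while state is not None:
        nxt = transitions[state]
        target = rewards[state] + (GAMMA * values[nxt] if nxt else 0.0)
        delta = target - values[state]    # prediction error
        values[state] += ALPHA * delta    # error-driven value update
        state = nxt

print(values)  # value propagates backwards: menu < order < eat
```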

The effectiveness of using vegetation to reinforce slopes is influenced by the soil and vegetation characteristics. Hence, this study pioneers the construction of an extensive soil database using random forest machine learning and ordinary kriging methods, focusing on the influence of plant roots on the saturated and unsaturated properties of residual soils. Soil organic content, which includes contributions from both soil organisms and roots, functions as a key factor in estimating soil hydraulic and mechanical properties influenced by vegetation roots.
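
For context on the methods named above, the sketch below shows random forest regression in that style using scikit-learn on synthetic data; the feature set, the target property, and all numbers are assumptions for illustration and are not drawn from the study's soil database (the kriging step is omitted).

```python
# Hedged sketch: random forest regression on synthetic soil-like data.
# Feature names, target, and values are assumptions, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.5, 8.0, n),    # soil organic content (%)  -- assumed feature
    rng.uniform(0.1, 2.0, n),    # root density (kg/m^3)     -- assumed feature
    rng.uniform(20.0, 80.0, n),  # clay fraction (%)         -- assumed feature
])
# Synthetic target loosely linking a hydraulic property to the features.
y = np.exp(-0.3 * X[:, 0] + 0.2 * X[:, 1]) * (100.0 - X[:, 2]) \
    + rng.normal(0.0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)
```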
