In two experiments, we used the simple zero-sum game Rock, Paper and Scissors to study the common reinforcement-based rules of repeating choices after winning (win-stay) and shifting from previous choice options after losing (lose-shift). Participants played the game against both computer opponents who could not be exploited and computer opponents who could be exploited by making choices that would at times conflict with reinforcement. Against unexploitable opponents, participants achieved an approximation of random behavior, contrary to previous research commonly finding reinforcement biases. Against exploitable opponents, the participants learned to exploit the opponent regardless of whether optimal choices conflicted with reinforcement or not. The data suggest that learning a rule that allows one to exploit was largely determined by the outcome of the previous trial.
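The win-stay/lose-shift rule described above can be written out concretely. The sketch below is a minimal illustration of the rule as commonly defined in the reinforcement-learning literature, not the authors' model; the function and variable names are our own.

```python
import random

MOVES = ["rock", "paper", "scissors"]

def win_stay_lose_shift(prev_move, prev_outcome):
    """Pick the next Rock-Paper-Scissors move under win-stay / lose-shift.

    prev_move: the player's last move, or None on the first trial.
    prev_outcome: "win", "loss", or "draw" on the last trial.

    Win-stay: repeat the previous move after a win.
    Lose-shift: after a loss (treated here as also covering draws,
    an assumption for illustration), switch to one of the other two
    moves at random.
    """
    if prev_move is None:
        return random.choice(MOVES)
    if prev_outcome == "win":
        return prev_move  # win-stay
    return random.choice([m for m in MOVES if m != prev_move])  # lose-shift
```

An unexploitable opponent (uniformly random play) gains nothing from this regularity, whereas an opponent that tracks the player's previous move and outcome can anticipate and counter it.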


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8809577
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0262249


Similar Publications


We explored the possibility that in order for longer-form expressions of reinforcement learning (win-calmness, loss-restlessness) to manifest across tasks, they must first develop because of micro-transactions within tasks. We found no evidence of win-calmness or loss-restlessness when wins could not be maximised (unexploitable opponents), nor when the threat of win minimisation was presented (exploiting opponents), but evidence of win-calmness (but not loss-restlessness) when wins could be maximised (exploitable opponents).


Variability in competitive decision-making speed and quality against exploiting and exploitative opponents.

Sci Rep

February 2021

Department of Psychology, University of Alberta, P-217 Biological Sciences Building, Edmonton, AB, T6G 2E9, Canada.

A presumption in previous work has been that sub-optimality in competitive performance following loss is the result of a reduction in decision-making time (i.e., post-error speeding).


To understand the boundaries we set for ourselves in terms of environmental responsibility during competition, we examined a neural index of outcome valence (feedback-related negativity; FRN) in relation to an early index of visual attention (N1), a later index of motivational significance (P3), and eventual behaviour. In Experiment 1 (n = 36), participants either were (play) or were not (observe) responsible for action selection. In Experiment 2 (n = 36), opponents additionally either could (exploitable) or could not (unexploitable) be beaten.


Verbruggen, Chambers, Lawrence, and McLaren (2017) recently challenged the view that individuals act with greater caution following the experience of a negative outcome by showing that a gambled loss resulted in faster reaction time (RT) on the next trial. Over three experiments, we replicate and establish the boundary conditions of this effect in the context of a simple game (rock, paper, scissors [RPS]). Choice responding against unexploitable opponents replicated the link between failure and faster responding.

