In the present work, we set out to assess whether and how much people learn in response to their stereotypic assumptions being confirmed, being disconfirmed, or remaining untested. In Study 1, participants made a series of judgments that could be influenced by stereotypes and received feedback that confirmed stereotypes the majority of the time, feedback that disconfirmed stereotypes the majority of the time, or no feedback on their judgments. Replicating past work on confirmation bias, patterns in the conditions with feedback indicated that pieces of stereotype-confirming evidence exerted more influence than stereotype-disconfirming evidence. Participants in the Stereotype-Confirming condition stereotyped more over time, but rates of stereotyping for participants in the Stereotype-Disconfirming condition remained unchanged. Participants who received no feedback, and thus no evidence, stereotyped more over time, indicating that, matching our core hypothesis, they learned from their own untested assumptions. Study 2 provided a direct replication of Study 1. In Study 3, we extended our assessment to memory. Participants made judgments and received a mix of confirmatory, disconfirmatory, and no feedback and were subsequently asked to remember the feedback they received on each trial, if any. Memory tests for the no feedback trials revealed that participants often misremembered that their untested assumptions were confirmed. Supplementing null hypothesis significance testing, Bayes Factor analyses indicated the data in Studies 1, 2, and 3 provided moderate-to-extreme evidence in favor of our hypotheses. Discussion focuses on the challenges these learning patterns create for efforts to reduce stereotyping.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9337700
DOI: http://dx.doi.org/10.1016/j.jesp.2022.104380


Similar Publications

Measurement of object recognition (OR) ability could predict learning and success in real-world settings, and there is hope that it may reduce bias often observed in cognitive tests. Although the measurement of visual OR is not expected to be influenced by the language of participants or the language of instructions, these assumptions remain largely untested. Here, we address the challenges of measuring OR abilities across linguistically diverse populations.


Many trials are designed to collect outcomes at or around pre-specified times after randomization. If there is variability in the times when participants are actually assessed, this can pose a challenge to learning the effect of treatment, since not all participants have outcome assessments at the times of interest. Furthermore, observed outcome values may not be representative of all participants' outcomes at a given time.


Positive Unlabeled Learning Selected Not At Random (PULSNAR): class proportion estimation without the selected completely at random assumption.

PeerJ Comput Sci

November 2024

Department of Internal Medicine, Division of Translational Informatics, University of New Mexico, Albuquerque, United States.

Positive and unlabeled (PU) learning is a type of semi-supervised binary classification where the machine learning algorithm differentiates between a set of positive instances (labeled) and a set of both positive and negative instances (unlabeled). PU learning has broad applications in settings where confirmed negatives are unavailable or difficult to obtain, and there is value in discovering positives among the unlabeled (e.g., viable drugs among untested compounds).
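The PU setting described above can be illustrated with a minimal class-prior estimator that relies on the selected-completely-at-random (SCAR) assumption, in the style of the classic Elkan and Noto baseline. Note that PULSNAR's contribution is precisely to relax SCAR, so this sketch shows the baseline setting, not the paper's method; the synthetic data, variable names, and hyperparameters below are all illustrative.

```python
# Minimal sketch: PU class-prior estimation under the SCAR assumption
# (Elkan-Noto style baseline). PULSNAR relaxes SCAR; this sketch does NOT.
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 0.3  # true fraction of positives hidden in the unlabeled set

# Synthetic 1-D data: positives ~ N(2, 1), negatives ~ N(-2, 1).
pos_labeled = rng.normal(2, 1, 500)                  # labeled positives
unlabeled = np.concatenate([
    rng.normal(2, 1, int(alpha_true * 2000)),        # hidden positives
    rng.normal(-2, 1, int((1 - alpha_true) * 2000)), # negatives
])

# Train a tiny logistic regression to predict "is labeled" (s = 1)
# versus "is unlabeled" (s = 0) by plain gradient descent.
x = np.concatenate([pos_labeled, unlabeled])
s = np.concatenate([np.ones(len(pos_labeled)), np.zeros(len(unlabeled))])
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - s) * x)
    b -= 0.5 * np.mean(p - s)

def score(v):
    """Model's estimate of P(s = 1 | x)."""
    return 1.0 / (1.0 + np.exp(-(w * v + b)))

# Under SCAR, P(s=1|x) = c * P(y=1|x) with constant c = P(s=1|y=1),
# so c is estimated by the mean score on known positives, and the
# positive fraction in the unlabeled set is the mean unlabeled score
# divided by c.
c_hat = score(pos_labeled).mean()
alpha_hat = score(unlabeled).mean() / c_hat
print(f"estimated positive fraction among unlabeled: {alpha_hat:.2f}")
```

When labeling is not completely at random (e.g., easy-to-identify positives are labeled more often), the constant-c assumption fails and this estimator becomes biased, which is the gap PULSNAR targets.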


We consider estimation of measures of model performance in a target population when covariate and outcome data are available from a source population and covariate data, but not outcome data, are available from the target population. In this setting, identification of measures of model performance is possible under an untestable assumption that the outcome and population (source or target) are independent conditional on covariates. In practice, this assumption is uncertain and, in some cases, controversial.


Many parasite species use multiple host species to complete development; however, empirical tests of models that seek to understand factors impacting evolutionary changes or maintenance of host number in parasite life cycles are scarce. Specifically, one model incorporating parasite mating systems, which posits that multihost life cycles are an adaptation to prevent inbreeding in hermaphroditic parasites and thus preclude inbreeding depression, remains untested. The model assumes that loss of a host results in parasite inbreeding and predicts that host loss can evolve only if there is no parasite inbreeding depression.

