Bandura argues that individuals are more likely to engage in social learning when they identify with a social model and when they are motivated or rewarded. Accordingly, the present work investigates how these two key factors, perceived similarity and affiliative motivation, influence the extent to which individuals engage in social tuning, that is, align their views with those of an interaction partner, especially when the partner's attitudes differ from those of the larger social group. Experiment 1 (170 participants) explored the role of perceived similarity, operationalized through group membership, when participants expected to collaborate with a partner whose climate change beliefs differed from those of a larger social group. Experiment 2 (115 participants) directly manipulated affiliative motivation (length of the anticipated interaction) along with perceived similarity (Greek Life membership) to test whether these factors influenced social tuning of drinking attitudes and behaviors. Experiments 3 (69 participants) and 4 (93 participants) replicated Experiment 2 and examined whether tuning occurred for explicit and implicit attitudes toward weight (negative attitudes in Experiment 3, positive attitudes in Experiment 4). Results indicate that individuals experiencing high affiliative motivation are more likely to engage in social tuning of explicit and implicit attitudes when their interaction partner belongs to their ingroup rather than their outgroup. These findings are consistent with the tenets of Social Learning Theory, Shared Reality Theory, and the affiliative social tuning hypothesis.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10470833
DOI: http://dx.doi.org/10.3389/fpsyg.2023.1060166

Publication Analysis

Top Keywords

social tuning (20), affiliative motivation (16), engage social (12), perceived similarity (12), social (10), individuals engage (8), social learning (8), larger social (8), social group (8), group experiment (8)

Similar Publications

The way media portray public health problems influences the public's perception of problems and related solutions. Social media allows users to engage with news and to collectively construct meaning. This paper examined news in comparison to user-generated content related to opioids to understand the role of second-level agenda-setting in public health.

Background: The implementation of large language models (LLMs), such as BART (Bidirectional and Auto-Regressive Transformers) and GPT-4, has revolutionized the extraction of insights from unstructured text. These advancements have expanded into health care, allowing analysis of social media for public health insights. However, the detection of drug discontinuation events (DDEs) remains underexplored.

Do machines and humans process language in similar ways? Recent research has hinted at the affirmative, showing that human neural activity can be effectively predicted using the internal representations of language models (LMs). Although such results are thought to reflect shared computational principles between LMs and human brains, there are also clear differences in how LMs and humans represent and use language. In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories.

Scalable information extraction from free text electronic health records using large language models.

BMC Med Res Methodol

January 2025

Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, 1620 Tremont Street, Suite 3030-R, Boston, MA, 02120, USA.

Article Synopsis
  • A study examined the efficacy of open-source large language models (LLMs) in extracting social determinants of health (SDoH) data from free-text clinical notes in electronic health records (EHRs).
  • The methodology involved analyzing free-text notes from a sample of 200 patients, applying LLMs and comparing their performance with a baseline pattern-matching model, achieving high inter-rater reliability among human reviewers.
  • Results showed LLMs significantly outperformed traditional methods, with one model achieving up to 40% higher accuracy, indicating that LLMs could enhance research and address health disparities if further refined and trained for specific applications.
A stimulus with light is clearly visual; a stimulus with sound is clearly auditory. But what makes a stimulus "social", and how do judgments of socialness differ across people? Here, we characterize both group-level and individual thresholds for perceiving the presence and nature of a social interaction. We take advantage of the fact that humans are primed to see social interactions (e.g., …).
