Social bots, employed to manipulate public opinion, pose a novel threat to digital societies. Existing bot research has emphasized technological aspects while neglecting the psychological factors shaping human-bot interactions. This research addresses that gap in the context of the US electorate. Two datasets provide evidence that partisanship distorts (a) online users' representation of bots, (b) their ability to identify them, and (c) their intentions to interact with them. Study 1 explores global bot perceptions through survey data from N = 452 Twitter (now X) users. Results suggest that users tend to attribute bot-related dangers to political adversaries rather than recognizing bots as a shared threat to political discourse. Study 2 (N = 619) evaluates the consequences of such misrepresentations for the quality of online interactions. In an online experiment, participants were asked to differentiate between human and bot profiles. Results indicate that partisan leanings explained systematic judgement errors. The same data suggest that participants aim to avoid interacting with bots; however, biased judgements may undermine this motivation in practice. In sum, the presented findings underscore the importance of interdisciplinary strategies that consider both technological and human factors to address the threats posed by bots in a rapidly evolving digital landscape.
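Though the abstract does not specify the analysis, one common way to quantify such partisan judgement errors is a signal detection breakdown by the profile's apparent partisanship. The sketch below is a hypothetical illustration with invented data and column names, not the authors' reported method.

```python
# Hypothetical illustration: quantifying partisan bias in bot/human judgements.
# Not the authors' analysis; the data and column names are invented.
import pandas as pd
from scipy.stats import norm

# Each row: one judgement of one profile by one participant.
judgements = pd.DataFrame({
    "profile_is_bot":  [1, 1, 0, 0, 1, 0, 1, 0],
    "judged_as_bot":   [1, 0, 0, 1, 1, 0, 0, 1],
    "profile_leaning": ["in-party", "out-party", "in-party", "out-party",
                        "out-party", "in-party", "in-party", "out-party"],
})

def sdt_stats(group):
    """Sensitivity (d') and criterion (c) with a log-linear correction."""
    bots = group[group["profile_is_bot"] == 1]
    humans = group[group["profile_is_bot"] == 0]
    hit_rate = (bots["judged_as_bot"].sum() + 0.5) / (len(bots) + 1)
    fa_rate = (humans["judged_as_bot"].sum() + 0.5) / (len(humans) + 1)
    d = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return pd.Series({"d_prime": d, "criterion": c})

# A partisan bias would show up as a criterion shift between leanings:
# participants more readily call out-party profiles "bots".
print(judgements.groupby("profile_leaning").apply(sdt_stats))
```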


Source: http://dx.doi.org/10.1111/bjso.12794

Publication Analysis

Top Keywords

Keyword                 Frequency
bots                    6
kind crowd!             4
crowd! partisan         4
partisan bias           4
bias distorts           4
distorts perceptions    4
perceptions political   4
political bots          4
bots twitter            4
twitter social          4

Similar Publications

Unraveling the Use of Disinformation Hashtags by Social Bots During the COVID-19 Pandemic: Social Networks Analysis.

JMIR Infodemiology

January 2025

Computational Social Science DataLab, University Institute of Research for Sustainable Social Development (INDESS), University of Cadiz, Jerez de la Frontera, Spain.

Background: During the COVID-19 pandemic, social media platforms have been a venue for the exchange of messages, including those related to fake news. There are also accounts programmed to disseminate and amplify specific messages, which can affect individual decision-making and present new challenges for public health.

Objective: This study aimed to analyze how social bots use hashtags compared to human users on topics related to misinformation during the outbreak of the COVID-19 pandemic.
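As a rough sketch of the kind of bot-versus-human hashtag comparison the objective describes, one might contrast the hashtag frequency distributions of the two account types. The tweets table, its columns, and the hashtags below are all invented for illustration; this is not the study's pipeline.

```python
# Hypothetical sketch: comparing hashtag use by bots vs. humans.
# The `tweets` DataFrame and its contents are assumptions, not the study's data.
import pandas as pd

tweets = pd.DataFrame({
    "is_bot":   [True, True, False, False, False],
    "hashtags": [["plandemic", "covid19"], ["plandemic"],
                 ["covid19"], ["vaccine", "covid19"], ["vaccine"]],
})

# One row per (account type, hashtag), then relative frequency per group.
exploded = tweets.explode("hashtags")
freq = (exploded.groupby("is_bot")["hashtags"]
        .value_counts(normalize=True)
        .rename("share")
        .reset_index())

# Hashtags over-represented among bot accounts stand out in the ratio column.
pivot = freq.pivot(index="hashtags", columns="is_bot", values="share").fillna(0)
pivot["bot_to_human_ratio"] = (pivot[True] + 1e-9) / (pivot[False] + 1e-9)
print(pivot.sort_values("bot_to_human_ratio", ascending=False))
```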


Because they capture category-level characteristics of graphs, graph neural networks have shown remarkable advantages in graph-level classification tasks such as rumor detection and anomaly detection. Due to the manipulation of special means (e.g.
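For context, graph-level classification with a GNN typically pools node embeddings into one embedding per graph before a classifier head. Below is a minimal generic sketch using PyTorch Geometric (an assumed library choice; this is not the cited model):

```python
# Minimal sketch of GNN graph-level classification (e.g. rumor detection).
# Generic illustration only; dimensions and architecture are placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)   # one embedding per graph
        return self.head(x)
```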


A graph neural architecture search approach for identifying bots in social media.

Front Artif Intell

December 2024

Decision Support Systems Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece.

Social media platforms, including X, Facebook, and Instagram, host millions of daily users, giving rise to bots: automated programs that disseminate misinformation and ideologies with tangible real-world consequences. While bot detection on platform X has been the focus of many deep learning models with adequate results, most approaches neglect the graph structure of social media relationships and often rely on hand-engineered architectures. Our work introduces the implementation of a Neural Architecture Search (NAS) technique, namely Deep and Flexible Graph Neural Architecture Search (DFG-NAS), tailored to Relational Graph Convolutional Neural Networks (RGCNs), for the task of bot detection on platform X.
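To make the RGCN building block concrete, here is a minimal node-level bot classifier sketch in PyTorch Geometric. The DFG-NAS search procedure itself is not reproduced, and the relation set and dimensions are placeholders:

```python
# Minimal RGCN node classifier for bot detection, as a generic sketch.
# Illustrates the RGCN building block only, not DFG-NAS itself.
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

NUM_RELATIONS = 2  # e.g. "follows" and "mentions"; placeholder choice

class BotRGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes=2):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hidden, num_relations=NUM_RELATIONS)
        self.conv2 = RGCNConv(hidden, num_classes, num_relations=NUM_RELATIONS)

    def forward(self, x, edge_index, edge_type):
        # edge_type assigns each edge to one relation (follow, mention, ...)
        x = F.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)
```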


Artificial Intelligence (AI) chatbots, such as ChatGPT, have been shown to mimic individual human behaviour in a wide range of psychological and economic tasks. Do groups of AI chatbots also mimic collective behaviour? If so, artificial societies of AI chatbots may aid social scientific research by simulating human collectives. To investigate this theoretical possibility, we focus on whether AI chatbots natively mimic one commonly observed collective behaviour: homophily, people's tendency to form communities with similar others.
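One standard way to operationalize homophily in such simulated communities is attribute assortativity over the interaction network. The small networkx sketch below illustrates the concept on a toy graph; it is not the study's method, and the graph and "group" labels are invented.

```python
# Illustrative sketch: measuring homophily as attribute assortativity.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([1, 2], group="a")
G.add_nodes_from([3, 4], group="b")
G.add_edges_from([(1, 2), (3, 4), (2, 3)])  # mostly within-group ties

# +1 = perfect homophily, 0 = random mixing, negative = heterophily.
print(nx.attribute_assortativity_coefficient(G, "group"))
```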


Background: Borderline ovarian tumors (BOTs) comprise 15%-20% of all ovarian epithelial malignancies. The majority are serous tumors, followed by mucinous tumors. Pre-operative cytological diagnosis plays an important role, with histopathology being the gold standard.

