The spread of low-credibility content by social bots.

Nat Commun

School of Informatics, Computing, and Engineering, Indiana University Bloomington, Bloomington, IN 47408, USA.

Published: November 2018

The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
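A minimal sketch of the kind of early-amplification measurement the abstract describes, assuming a hypothetical tweet log with columns article_id, timestamp, and bot_score (a Botometer-style score in [0, 1]); the 0.5 cutoff and the one-hour "early" window are illustrative assumptions, not the paper's parameters.

```python
import pandas as pd

# Hypothetical tweet log: one row per message sharing an article.
# Columns (assumed): article_id, timestamp, bot_score in [0, 1].
tweets = pd.read_csv("tweets.csv", parse_dates=["timestamp"])

BOT_THRESHOLD = 0.5                   # illustrative cutoff, not the paper's
EARLY_WINDOW = pd.Timedelta(hours=1)  # "early spreading moments" (assumed)

# Time elapsed since each article's first share.
first_share = tweets.groupby("article_id")["timestamp"].transform("min")
tweets["age"] = tweets["timestamp"] - first_share
tweets["is_bot"] = tweets["bot_score"] >= BOT_THRESHOLD
tweets["phase"] = tweets["age"].le(EARLY_WINDOW).map({True: "early", False: "late"})

# Share of likely-bot accounts among spreaders, early vs. late.
bot_share = tweets.groupby("phase")["is_bot"].mean()
print(bot_share)  # a higher early share would mirror the reported pattern
```

A higher bot share in the early phase than in the late phase would reproduce, in miniature, the amplification pattern the study reports.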

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6246561
DOI: http://dx.doi.org/10.1038/s41467-018-06930-7

Publication Analysis

Top Keywords

social bots (16)
low-credibility sources (8)
bots (6)
social (5)
spread low-credibility (4)
low-credibility content (4)
content social (4)
bots massive (4)
massive spread (4)
spread digital (4)

Similar Publications

Unraveling the Use of Disinformation Hashtags by Social Bots During the COVID-19 Pandemic: Social Networks Analysis.

JMIR Infodemiology

January 2025

Computational Social Science DataLab, University Institute of Research for Sustainable Social Development (INDESS), University of Cadiz, Jerez de la Frontera, Spain.

Background: During the COVID-19 pandemic, social media platforms served as venues for the exchange of messages, including fake news. There are also accounts programmed to disseminate and amplify specific messages, which can affect individual decision-making and present new challenges for public health.

Objective: This study aimed to analyze how social bots use hashtags compared to human users on topics related to misinformation during the outbreak of the COVID-19 pandemic.
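A rough sketch of one way such a bot-versus-human comparison could be run, assuming a hypothetical tweet table with columns account_type ("bot" or "human", e.g., from a Botometer-style classifier) and hashtags; this illustrates only a frequency comparison, not the study's full social network analysis.

```python
import pandas as pd

# Hypothetical data: one row per tweet, hashtags as a space-separated string.
df = pd.read_csv("covid_tweets.csv")  # columns (assumed): account_type, hashtags

# Explode tweets into one row per hashtag use.
df["hashtags"] = df["hashtags"].str.lower().str.split()
uses = df.explode("hashtags").dropna(subset=["hashtags"])

# Relative hashtag frequency within each account type.
freq = (uses.groupby("account_type")["hashtags"]
            .value_counts(normalize=True)
            .rename("share")
            .reset_index())

# Hashtags used disproportionately by bots relative to humans.
pivot = freq.pivot(index="hashtags", columns="account_type", values="share").fillna(0)
pivot["bot_bias"] = pivot.get("bot", 0) - pivot.get("human", 0)
print(pivot.sort_values("bot_bias", ascending=False).head(10))
```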


A graph neural architecture search approach for identifying bots in social media.

Front Artif Intell

December 2024

Decision Support Systems Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece.

Social media platforms, including X, Facebook, and Instagram, host millions of daily users, giving rise to bots: automated programs that disseminate misinformation and ideologies, with tangible real-world consequences. While bot detection on platform X has been the focus of many deep learning models with adequate results, most approaches neglect the graph structure of social media relationships and often rely on hand-engineered architectures. Our work introduces a Neural Architecture Search (NAS) technique, namely Deep and Flexible Graph Neural Architecture Search (DFG-NAS), tailored to Relational Graph Convolutional Neural Networks (RGCNs), for the task of bot detection on platform X.
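The architecture-search procedure itself is not shown here; as a point of reference, a minimal fixed RGCN bot classifier of the kind DFG-NAS would search over might look as follows, using PyTorch Geometric's RGCNConv. The dimensions, the two relation types, and the dataset wiring are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class BotRGCN(torch.nn.Module):
    """Two-layer RGCN for binary bot/human node classification."""
    def __init__(self, in_dim, hidden_dim=64, num_relations=2):
        super().__init__()
        # num_relations=2 assumes e.g. "follows" and "mentions" edge types.
        self.conv1 = RGCNConv(in_dim, hidden_dim, num_relations)
        self.conv2 = RGCNConv(hidden_dim, 2, num_relations)

    def forward(self, x, edge_index, edge_type):
        x = F.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)  # logits: [human, bot]

# Hypothetical usage with a preprocessed account graph `data`:
# model = BotRGCN(in_dim=data.num_features)
# logits = model(data.x, data.edge_index, data.edge_type)
# loss = F.cross_entropy(logits[data.train_mask], data.y[data.train_mask])
```

A NAS procedure like DFG-NAS would vary the depth and the arrangement of propagation and transformation steps rather than fixing two layers by hand, as this sketch does.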


Artificial Intelligence (AI) chatbots, such as ChatGPT, have been shown to mimic individual human behaviour in a wide range of psychological and economic tasks. Do groups of AI chatbots also mimic collective behaviour? If so, artificial societies of AI chatbots may aid social scientific research by simulating human collectives. To investigate this theoretical possibility, we focus on whether AI chatbots natively mimic one commonly observed collective behaviour: homophily, people's tendency to form communities with similar others.
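As a concrete illustration of how homophily could be quantified in such a simulation, a common measure is attribute assortativity over the interaction network; the sketch below uses networkx on a toy graph, with the agents' "group" attribute and the edges standing in for whatever interaction data an actual experiment would produce.

```python
import networkx as nx

# Toy interaction network: nodes are simulated chatbot agents, edges mean
# two agents chose to interact; "group" is an assumed trait (e.g., a persona).
G = nx.Graph()
G.add_nodes_from([(i, {"group": "A" if i < 4 else "B"}) for i in range(8)])
G.add_edges_from([(0, 1), (1, 2), (2, 3), (0, 3),   # within group A
                  (4, 5), (5, 6), (6, 7), (4, 7),   # within group B
                  (3, 4)])                          # one cross-group tie

# +1 means perfect homophily (ties only within groups),
# 0 means random mixing, negative values mean heterophily.
r = nx.attribute_assortativity_coefficient(G, "group")
print(f"attribute assortativity: {r:.2f}")
```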


The study examines different graph-based methods for detecting anomalous activities on digital markets, proposing the most efficient way to increase the protection of market actors and reduce information asymmetry. Anomalies are defined here as both bots and fraudulent users (who may be either bots or real people). The methods are compared against each other and against state-of-the-art results from the literature, and a new algorithm is proposed.
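The paper's own algorithm is not reproduced in this excerpt; as a generic illustration of graph-based anomaly scoring on a user-interaction graph, one simple baseline flags accounts whose degree or clustering deviates strongly from the population. The networkx sketch below uses an assumed z-score cutoff and is a crude stand-in for the richer methods the study compares.

```python
import networkx as nx
import statistics

def anomaly_scores(G, z_cut=2.0):
    """Flag nodes whose degree or clustering is a z_cut-sigma outlier.
    A crude baseline only; real detectors use far richer features."""
    deg = dict(G.degree())
    clu = nx.clustering(G)
    flagged = {}
    for feats in (deg, clu):
        mu = statistics.mean(feats.values())
        sd = statistics.pstdev(feats.values()) or 1.0
        for node, value in feats.items():
            z = abs(value - mu) / sd
            if z > z_cut:
                flagged[node] = max(flagged.get(node, 0.0), z)
    return flagged  # node -> strongest z-score across the two features

# Hypothetical usage on a market interaction graph:
# G = nx.read_edgelist("market_interactions.edgelist")
# print(sorted(anomaly_scores(G).items(), key=lambda kv: -kv[1])[:10])
```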


The proliferation of AI-powered bots and sophisticated fraudsters poses a significant threat to the integrity of scientific studies that rely on online surveys across diverse disciplines, including the health, social, environmental, and political sciences. We found a substantial decline in usable responses from online surveys, from 75% to 10% in recent years, due to survey fraud. Monetary incentives attract sophisticated fraudsters capable of mimicking genuine open-ended responses and of verifying information submitted months prior, showcasing the advanced capabilities of online survey fraud today.
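The authors' screening protocol is not given in this excerpt; the sketch below shows common fraud-screening heuristics of the kind such studies apply, assuming a hypothetical response table with columns duration_sec, ip, and open_text (the thresholds are illustrative).

```python
import pandas as pd

responses = pd.read_csv("survey_responses.csv")
# Columns (assumed): respondent_id, duration_sec, ip, open_text

MIN_DURATION = 120  # illustrative: flag implausibly fast completions

flags = pd.DataFrame(index=responses.index)
flags["speeder"] = responses["duration_sec"] < MIN_DURATION
flags["dup_ip"] = responses["ip"].duplicated(keep=False)
# Near-identical open-ended answers often indicate scripted submissions.
normalized = responses["open_text"].str.lower().str.strip()
flags["dup_text"] = normalized.duplicated(keep=False)

responses["suspect"] = flags.any(axis=1)
print(f"{responses['suspect'].mean():.0%} of responses flagged for review")
```

Flagged responses would typically go to manual review rather than automatic exclusion, since each heuristic alone produces false positives.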

