Social bots have already infiltrated social media platforms such as Twitter and Facebook. Exploring the role of social bots in discussions of the COVID-19 pandemic, and comparing the behavioral differences between social bots and humans, is an important foundation for studying the dissemination of public health opinion. We collected data on Twitter and used Botometer to classify users into social bots and humans. Machine learning methods were used to analyze the topic semantics, sentiment attributes, dissemination intentions, and interaction patterns of humans and social bots. The results show that 22% of these accounts were social bots and 78% were humans, with significant differences in behavioral characteristics between the two groups. Social bots focus more on public health news topics, whereas humans are more concerned with individual health and daily life. More than 85% of bots' tweets are liked, and bots have large numbers of followers and friends, which means they can influence internet users' perceptions of disease transmission and public health. In addition, social bots, located mainly in European and American countries, create an "authoritative" image by posting large volumes of news, which in turn gains more attention and has a significant effect on humans. The findings contribute to understanding the behavioral patterns of new technologies such as social bots and their role in the dissemination of public health information.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9967279
DOI: http://dx.doi.org/10.3390/ijerph20043284
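The study above reports splitting accounts into bots and humans based on Botometer output. A minimal sketch of that kind of score-threshold partition is shown below; the user names, score values, and the 0.5 cutoff are illustrative assumptions, not the paper's actual data or threshold.

```python
# Hypothetical sketch: partitioning accounts into social bots and humans
# by thresholding bot-likelihood scores (as produced by a tool like
# Botometer). All names, scores, and the 0.5 cutoff are assumptions.

def split_accounts(scores, threshold=0.5):
    """Partition a {user: bot_score} mapping into (bots, humans) lists."""
    bots = [user for user, s in scores.items() if s >= threshold]
    humans = [user for user, s in scores.items() if s < threshold]
    return bots, humans

scores = {"user_a": 0.91, "user_b": 0.12, "user_c": 0.78, "user_d": 0.05}
bots, humans = split_accounts(scores)
bot_share = len(bots) / len(scores)
print(f"bots: {bot_share:.0%}, humans: {1 - bot_share:.0%}")  # bots: 50%, humans: 50%
```

In the study itself this split (22% bots, 78% humans) feeds the downstream comparisons of topics, sentiment, and interaction patterns.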
J R Soc Interface
January 2025
Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Fukuoka 816-8580, Japan.
The positive impact of cooperative bots on cooperation within evolutionary game theory is well-documented. However, prior studies predominantly use discrete strategic frameworks with deterministic actions. This article explores continuous and mixed strategic approaches.
J Med Internet Res
January 2025
Tobacco Settlement Endowment Trust Health Promotion Research Center, Stephenson Cancer Center, University of Oklahoma Health Sciences, Oklahoma City, OK, United States.
Background: Social behavioral research studies have increasingly shifted to remote recruitment and enrollment procedures. This shifting landscape necessitates evolving best practices to help mitigate the negative impacts of deceptive attempts (eg, fake profiles and bots) at enrolling in behavioral research.
Objective: This study aimed to develop and implement robust deception detection procedures during the enrollment period of a remotely conducted randomized controlled trial.
JMIR Infodemiology
January 2025
Computational Social Science DataLab, University Institute of Research for Sustainable Social Development (INDESS), University of Cadiz, Jerez de la Frontera, Spain.
Background: During the COVID-19 pandemic, social media platforms have been a venue for the exchange of messages, including those related to fake news. There are also accounts programmed to disseminate and amplify specific messages, which can affect individual decision-making and present new challenges for public health.
Objective: This study aimed to analyze how social bots use hashtags compared to human users on topics related to misinformation during the outbreak of the COVID-19 pandemic.
Front Artif Intell
December 2024
Decision Support Systems Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece.
Social media platforms, including X, Facebook, and Instagram, host millions of daily users, giving rise to bots: automated programs that disseminate misinformation and ideologies with tangible real-world consequences. While bot detection on platform X has been the focus of many deep learning models with adequate results, most approaches neglect the graph structure of social media relationships and often rely on hand-engineered architectures. Our work introduces a Neural Architecture Search (NAS) technique, namely Deep and Flexible Graph Neural Architecture Search (DFG-NAS), tailored to Relational Graph Convolutional Neural Networks (RGCNs) for the task of bot detection on platform X.
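The RGCNs mentioned above aggregate neighbor features separately per relation type (e.g. follows vs. retweets) before combining them with a self-connection. A minimal NumPy sketch of one such layer is given below; the toy graph, mean-aggregation normalization, and all shapes are illustrative assumptions, not the authors' searched architecture.

```python
# Minimal sketch of one Relational Graph Convolutional Network (RGCN)
# layer, the building block the cited work applies NAS to. The toy
# graph, relation types, and shapes here are illustrative assumptions.
import numpy as np

def rgcn_layer(h, adj_by_rel, w_rel, w_self):
    """h: (n, d_in) node features; adj_by_rel: one (n, n) adjacency
    matrix per relation; w_rel: one (d_in, d_out) weight per relation;
    w_self: (d_in, d_out) self-loop weight. Returns ReLU(sum)."""
    out = h @ w_self  # self-connection
    for adj, w in zip(adj_by_rel, w_rel):
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)  # avoid /0
        out += (adj / deg) @ h @ w  # mean-aggregate neighbors per relation
    return np.maximum(out, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3
h = rng.normal(size=(n, d_in))
follows = (rng.random((n, n)) < 0.4).astype(float)   # relation 1 (assumed)
retweets = (rng.random((n, n)) < 0.4).astype(float)  # relation 2 (assumed)
w_rel = [rng.normal(size=(d_in, d_out)) for _ in range(2)]
w_self = rng.normal(size=(d_in, d_out))
h_next = rgcn_layer(h, [follows, retweets], w_rel, w_self)
print(h_next.shape)  # (5, 3)
```

The NAS component in the cited work then searches over how many such propagation and transformation steps to stack, rather than fixing the depth by hand.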
Br J Psychol
December 2024
New York University, New York, New York, USA.
Artificial Intelligence (AI) chatbots, such as ChatGPT, have been shown to mimic individual human behaviour in a wide range of psychological and economic tasks. Do groups of AI chatbots also mimic collective behaviour? If so, artificial societies of AI chatbots may aid social scientific research by simulating human collectives. To investigate this theoretical possibility, we focus on whether AI chatbots natively mimic one commonly observed collective behaviour: homophily, people's tendency to form communities with similar others.