We report results from simultaneous experiments conducted in late 2022 in Belarus, Estonia, Kazakhstan, Russia, and Ukraine. The experiments focus on fact-checking misinformation supportive of Russia in the Russia-Ukraine War. Meta-analysis makes clear that fact-checking misinformation reduces belief in pro-Kremlin false claims. Effects of fact-checks are not uniform across countries; our meta-analytic estimate is driven by the belief accuracy increases observed in Russia and Ukraine. While fact-checks improve belief accuracy, they do not change respondents' attitudes about which side to support in the War. War does not render individuals hopelessly vulnerable to misinformation, but fact-checking misinformation is unlikely to change their views toward the conflict.
Full text:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419341
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0307090
Front Psychol
December 2024
Department of Communication and Media, University of Liverpool, Liverpool, United Kingdom.
In the fast-paced, densely populated information landscape shaped by digitization, distinguishing information from misinformation is critical. Fact-checkers are effective in fighting fake news but face challenges such as cognitive overload and time pressure, which increase susceptibility to cognitive biases. Establishing standards to mitigate these biases can improve the quality of fact-checks, bolster audience trust, and protect against reputation attacks from disinformation actors.
JMIR Form Res
December 2024
School of Journalism and Communication, Beijing Normal University, Beijing, China.
Background: The proliferation of generative artificial intelligence (AI), such as ChatGPT, has added complexity and richness to the virtual environment by increasing the presence of AI-generated content (AIGC). Although social media platforms such as TikTok have begun labeling AIGC to help users distinguish it from human-generated content, little research has examined the effect of these AIGC labels.
Objective: This study investigated the impact of AIGC labels on perceived accuracy, message credibility, and sharing intention for misinformation through a web-based experimental design, aiming to refine the strategic application of AIGC labels.
PNAS Nexus
December 2024
Complexity Science Hub, Vienna 1080, Austria.
Political conflict is an essential element of democratic systems, but can also threaten their existence if it becomes too intense. This happens particularly when most political issues become aligned along the same major fault line, splitting society into two antagonistic camps. In the 20th century, major fault lines were formed by structural conflicts, like owners vs.
Proc Natl Acad Sci U S A
December 2024
Observatory on Social Media, Indiana University, Bloomington, IN 47408.
Fact checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent AI language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here, we investigate the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and sharing intent of, political news headlines in a preregistered randomized control experiment.
BMC Public Health
November 2024
Institute for Planetary Health Behavior, Health Communication, University of Erfurt, Erfurt, Germany.
Believing conspiracy narratives is frequently assumed to be a major cause of vaccine hesitancy, i.e., the tendency to forgo vaccination despite its availability.