We report results from simultaneous experiments conducted in late 2022 in Belarus, Estonia, Kazakhstan, Russia, and Ukraine. The experiments focus on fact-checking misinformation supportive of Russia in the Russia-Ukraine War. Meta-analysis makes clear that fact-checking misinformation reduces belief in pro-Kremlin false claims. The effects of fact-checks are not uniform across countries, however; our meta-analytic estimate is driven largely by the belief-accuracy gains observed in Russia and Ukraine. While fact-checks improve belief accuracy, they do not change respondents' attitudes about which side to support in the War. War does not render individuals hopelessly vulnerable to misinformation, but fact-checking misinformation is unlikely to change their views toward the conflict.


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419341
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0307090

Publication Analysis

Top Keywords

fact-checking misinformation (12), russia-ukraine war (8), change views (8), russia ukraine (8), belief accuracy (8), war (5), correcting misinformation (4), misinformation russia-ukraine (4), war reduces (4), reduces false (4)

Similar Publications

In the fast-paced, densely populated information landscape shaped by digitization, distinguishing information from misinformation is critical. Fact-checkers are effective in fighting fake news but face challenges such as cognitive overload and time pressure, which increase susceptibility to cognitive biases. Establishing standards to mitigate these biases can improve the quality of fact-checks, bolster audience trust, and protect against reputation attacks from disinformation actors.


Background: The proliferation of generative artificial intelligence (AI), such as ChatGPT, has added complexity and richness to the virtual environment by increasing the presence of AI-generated content (AIGC). Although social media platforms such as TikTok have begun labeling AIGC to help users distinguish it from human-generated content, little research has examined the effect of these labels.

Objective: This study investigated the impact of AIGC labels on perceived accuracy, message credibility, and sharing intention for misinformation through a web-based experimental design, aiming to refine the strategic application of AIGC labels.


Political conflict is an essential element of democratic systems, but it can also threaten their existence if it becomes too intense. This happens particularly when most political issues become aligned along the same major fault line, splitting society into two antagonistic camps. In the 20th century, major fault lines were formed by structural conflicts, like owners vs. workers.


Fact-checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent AI language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here, we investigate the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and sharing intent of, political news headlines in a preregistered randomized controlled experiment.


Believing conspiracy narratives is frequently assumed to be a major cause of vaccine hesitancy, i.e., the tendency to forgo vaccination despite its availability.

