Fact checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent AI language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here, we investigate the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and sharing intent of, political news headlines in a preregistered randomized controlled experiment. Although the LLM accurately identifies most false headlines (90%), we find that this information does not significantly improve participants' ability to discern headline accuracy or share accurate news. In contrast, viewing human-generated fact checks enhances discernment in both cases. Subsequent analysis reveals that the AI fact-checker is harmful in specific cases: It decreases belief in true headlines that it mislabels as false and increases belief in false headlines that it is unsure about. On the positive side, AI fact-checking information increases the sharing intent for correctly labeled true headlines. When participants are given the option to view LLM fact checks and choose to do so, they are significantly more likely to share both true and false news but only more likely to believe false headlines. Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences.
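
For context on the "discernment" outcomes reported above: in this literature, discernment is typically computed as the difference between responses to true and false headlines (e.g., mean belief in true headlines minus mean belief in false headlines), compared across experimental conditions; the same difference score applied to sharing intentions yields sharing discernment. The short Python sketch below illustrates that standard calculation. The column names, condition labels, and toy numbers are illustrative assumptions, not the paper's actual data or exact specification.

# Minimal sketch (assumed schema): each row is one participant-headline rating.
# "believed" is the belief rating (here a 0-1 proportion or binary judgment).
import pandas as pd

def discernment(df: pd.DataFrame) -> pd.Series:
    # Mean belief in true headlines minus mean belief in false headlines,
    # computed separately for each experimental condition.
    belief = df.groupby(["condition", "headline_is_true"])["believed"].mean().unstack()
    return belief[True] - belief[False]

# Toy example: higher values indicate better discernment of headline accuracy.
ratings = pd.DataFrame({
    "condition":        ["control", "control", "llm_fact_check", "llm_fact_check"],
    "headline_is_true": [True, False, True, False],
    "believed":         [0.80, 0.40, 0.80, 0.50],
})
print(discernment(ratings))  # e.g., control: ~0.40, llm_fact_check: ~0.30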

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11648662
DOI: http://dx.doi.org/10.1073/pnas.2322823121

Similar Publications

One explanation for why people accept ideologically welcome misinformation is that they are insincere. Consistent with the insincerity hypothesis, past experiments have demonstrated that bias in the veracity assessment of publicly reported statistics and debunked news headlines often diminishes considerably when accuracy is incentivized. Many statements encountered online, however, are previously unseen claims whose veracity is difficult to evaluate.

Sharing without clicking on news in social media.
Nat Hum Behav, November 2024. Social Science Research Institute and Population Research Institute, Pennsylvania State University, University Park, PA, USA.

Social media have enabled laypersons to disseminate, at scale, links to news and public affairs information. Many individuals share such links without first reading the linked information. Here we analysed over 35 million public Facebook posts with uniform resource locators shared between 2017 and 2020, and discovered that such 'shares without clicks' (SwoCs) constitute around 75% of forwarded links.

Nearly five billion people use social media and receive news through it, and there is widespread concern about the negative consequences of misinformation on these platforms (e.g., election interference, vaccine hesitancy).

Misinformation is a major focus of intervention efforts. Psychological inoculation-an intervention intended to help people identify manipulation techniques-is being adopted at scale around the globe. Yet the efficacy of this approach for increasing belief accuracy remains unclear, as prior work uses synthetic materials that do not contain claims of truth.
