Large language models (LLMs) and the institutionalization of misinformation.

Trends Cogn Sci

Psychological and Brain Sciences, Fairfield University, Fairfield, CT, USA.

Published: December 2024

Large language models (LLMs), such as ChatGPT, flood the Internet with true and false information alike, crafted and delivered with techniques that psychological science suggests will encourage people to believe it is true. What's more, as people feed this misinformation back onto the Internet, emerging LLMs will absorb it and pass it on to other models. In such a scenario, we could lose access to the information that helps us tell what is real from what is not, that is, to do 'reality monitoring.' If that happens, misinformation will become the new foundation we use to plan, to make decisions, and to vote, and we will lose trust in our institutions and in each other.
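The abstract describes a feedback loop: model-generated misinformation re-enters the Web, where it becomes training data for the next generation of models. As a rough illustration of why such a loop compounds, here is a minimal Python sketch of corpus contamination across model generations; the function and every parameter (human_share, human_false, model_false_boost) are invented for illustration, and none of the numbers come from the paper.

# Toy model of the feedback loop described above: each generation of models
# trains on a corpus mixing human-written text with output of the previous
# generation, which inherits that corpus's error rate and adds its own.
# All parameters are illustrative assumptions, not values from the paper.

def contamination_rates(generations: int = 10,
                        human_share: float = 0.5,       # fraction of the corpus still human-written
                        human_false: float = 0.05,      # misinformation rate in human text
                        model_false_boost: float = 0.10 # extra falsehoods each model adds
                        ) -> list[float]:
    """Return the corpus-wide misinformation fraction for each generation."""
    rates = [human_false]  # generation 0 trains on purely human text
    for _ in range(generations):
        model_false = min(1.0, rates[-1] + model_false_boost)
        rates.append(human_share * human_false + (1 - human_share) * model_false)
    return rates

if __name__ == "__main__":
    for gen, rate in enumerate(contamination_rates()):
        print(f"generation {gen}: ~{rate:.1%} of training corpus is misinformation")

With these made-up defaults, the misinformation fraction roughly triples before plateauing, which illustrates the qualitative claim: even a modest per-generation error rate compounds once model output re-enters the training pool.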


Source: http://dx.doi.org/10.1016/j.tics.2024.08.007

Publication Analysis

Top Keywords (keyword: frequency)

large language: 8
language models: 8
models llms: 8
llms institutionalization: 4
institutionalization misinformation: 4
misinformation large: 4
llms chatgpt: 4
chatgpt flood: 4
flood internet: 4
internet true: 4

Similar Publications

Background: Health misinformation undermines responses to health crises, and social media amplifies the problem. Although organizations work to correct misinformation, challenges persist, including the difficulty of sharing corrections effectively and the sheer volume of information. At the same time, social media offers valuable interaction data, enabling researchers to analyze how users engage with corrections of health misinformation and to refine content design strategies.


Purpose Of Review: Artificial intelligence (AI) offers a new frontier for aiding the management of both acute and chronic pain, and may transform opioid prescribing practices and addiction prevention strategies. In this review, we discuss some of the current literature on predicting opioid-related outcomes and briefly outline the next steps for improving the trustworthiness of these AI models before they are used in real-time clinical workflows.

Recent Findings: Machine learning-based predictive models for identifying risk for persistent postoperative opioid use have been reported for spine surgery, knee arthroplasty, hip arthroplasty, arthroscopic joint surgery, outpatient surgery, and mixed surgical populations.


Biological, linguistic, and individual factors govern voice quality.

J Acoust Soc Am

January 2025

USC Viterbi School of Engineering, University of Southern California, Los Angeles, California 90089-1455, USA.

Voice quality serves as a rich source of information about speakers, providing listeners with impressions of identity, emotional state, age, sex, reproductive fitness, and other biologically and socially salient characteristics. Understanding how this information is transmitted, accessed, and exploited requires knowledge of the psychoacoustic dimensions along which voices vary, an area that remains largely unexplored. Recent studies of English speakers have shown that two factors related to speaker size and arousal consistently emerge as the most important determinants of quality, regardless of who is speaking.


Background: Artificial intelligence (AI) has become widely applied across many fields, including medical education. The validity of its content and answers depends on the training datasets and the optimization of each model. The accuracy of large language models (LLMs) on basic medical examinations, and the factors related to that accuracy, have also been explored.

