Large language models (LLMs) such as ChatGPT flood the Internet with true and false information, crafted and delivered with techniques that psychological science suggests will encourage people to think that information is true. What is more, as people feed this misinformation back onto the Internet, emerging LLMs will adopt it and feed it back into other models. In such a scenario, we could lose access to the information that helps us tell what is real from what is not, that is, to do 'reality monitoring.' If that happens, misinformation will become the foundation we use to plan, to make decisions, and to vote, and we will lose trust in our institutions and in each other.
DOI: 10.1016/j.tics.2024.08.007
J Med Internet Res
January 2025
Institute of Learning Sciences and Technologies, National Tsing Hua University, Hsinchu, Taiwan.
Background: Health misinformation undermines responses to health crises, and social media amplifies the problem. Although organizations work to correct misinformation, challenges persist, including the difficulty of disseminating corrections effectively and the sheer volume of information users face. At the same time, social media provides valuable interactive data that lets researchers analyze how users engage with corrections of health misinformation and refine content design strategies accordingly.
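As a rough illustration of the kind of interactive data described above, the sketch below (not taken from the article) summarizes engagement with correction posts from a hypothetical platform export; the file name and column names (post_type, likes, shares, comments) are assumptions.

```python
# Minimal sketch (not from the article): comparing engagement on correction
# posts versus other posts, assuming a hypothetical CSV export of social media
# data with columns: post_id, post_type, likes, shares, comments.
import pandas as pd

posts = pd.read_csv("posts.csv")  # hypothetical export, one row per post

# Total engagement per post, then engagement summarized by post type
posts["engagement"] = posts[["likes", "shares", "comments"]].sum(axis=1)
summary = posts.groupby("post_type")["engagement"].agg(["count", "mean", "median"])
print(summary)
```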
Curr Pain Headache Rep
January 2025
Division of Perioperative Informatics, Department of Anesthesiology, University of California, San Diego, La Jolla, CA, USA.
Purpose Of Review: Artificial intelligence (AI) offers a new frontier for aiding the management of both acute and chronic pain, with the potential to transform opioid prescribing practices and addiction prevention strategies. In this review, we discuss some of the current literature on predicting various opioid-related outcomes and briefly outline the next steps needed to improve the trustworthiness of these AI models before they are used in real-time clinical workflows.
Recent Findings: Machine learning-based predictive models for identifying risk for persistent postoperative opioid use have been reported for spine surgery, knee arthroplasty, hip arthroplasty, arthroscopic joint surgery, outpatient surgery, and mixed surgical populations.
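As a hedged sketch of what such a predictive model can look like (not the pipeline of any study cited in the review), the example below trains a gradient-boosting classifier on a hypothetical surgical cohort; the file name, feature names, and outcome column are assumptions.

```python
# Minimal sketch (not from the review): a risk model for persistent postoperative
# opioid use, assuming a hypothetical tabular dataset with preoperative features
# and a binary outcome column "persistent_use".
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

data = pd.read_csv("surgical_cohort.csv")  # hypothetical cohort
features = ["age", "preop_opioid_use", "pain_score", "depression_history"]  # assumed columns

X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["persistent_use"], test_size=0.2, random_state=0
)

# Fit the classifier and report discrimination on the held-out set
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```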
J Acoust Soc Am
January 2025
USC Viterbi School of Engineering, University of Southern California, Los Angeles, California 90089-1455, USA.
Voice quality serves as a rich source of information about speakers, providing listeners with impressions of identity, emotional state, age, sex, reproductive fitness, and other biologically and socially salient characteristics. Understanding how this information is transmitted, accessed, and exploited requires knowledge of the psychoacoustic dimensions along which voices vary, an area that remains largely unexplored. Recent studies of English speakers have shown that two factors related to speaker size and arousal consistently emerge as the most important determinants of quality, regardless of who is speaking.
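The low-dimensional structure mentioned above is typically recovered with methods such as principal component or factor analysis over acoustic measures; the sketch below is an illustrative assumption, not the study's analysis, and the feature file and its columns are hypothetical.

```python
# Minimal sketch (not the study's analysis): recovering low-dimensional structure
# in voice-quality measures via PCA, assuming a hypothetical CSV of per-utterance
# acoustic features (e.g., F0, HNR, spectral tilt, jitter, shimmer).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

acoustic = pd.read_csv("voice_measures.csv")  # assumed feature table
X = StandardScaler().fit_transform(acoustic.values)

# Project onto the first two components and report how much variance they capture
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("Variance explained by first two components:", pca.explained_variance_ratio_)
```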
JMIR Med Educ
January 2025
Institute of Medicine, Suranaree University of Technology, 111 University Avenue, Nakhon Ratchasima, 30000, Thailand, 66 44223956.
Background: Artificial intelligence (AI) has become widely applied across many fields, including medical education. The validity of its content and the answers it gives depend on the training datasets and the optimization of each model. The accuracy of large language models (LLMs) on basic medical examinations, and the factors related to that accuracy, have also been explored.
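As a minimal sketch of how LLM accuracy on multiple-choice examination items can be scored (an assumption, not the study's protocol), the example below compares pre-collected model answers with an answer key; the response file and its columns are hypothetical.

```python
# Minimal sketch (an assumption, not the study's method): scoring an LLM's
# multiple-choice answers against a key, assuming responses have already been
# collected into a CSV with columns: item_id, model_answer, correct_answer.
import pandas as pd

results = pd.read_csv("exam_responses.csv")  # hypothetical response log

# Normalize letter choices before comparison, then report overall accuracy
results["is_correct"] = (
    results["model_answer"].str.strip().str.upper()
    == results["correct_answer"].str.strip().str.upper()
)
print(f"Accuracy: {results['is_correct'].mean():.1%} on {len(results)} items")
```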