Large language models and their big bullshit potential.

Ethics Inf Technol

School of English, Communication and Philosophy, Cardiff University, Cardiff, UK.

Published: October 2024

Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452423
DOI: http://dx.doi.org/10.1007/s10676-024-09802-5

