Background: While Large Language Models (LLMs) are viewed positively with respect to their technological capabilities, people tend to oppose machines making moral decisions. However, the circumstances under which algorithm aversion or algorithm appreciation are more likely to occur with respect to LLMs have not yet been sufficiently investigated. The aim of this study was therefore to examine how texts on moral or technological topics, allegedly written either by a human author or by ChatGPT, are perceived.

Methods: In a randomized controlled experiment, N = 164 participants read six texts, three with a moral and three with a technological topic (predictor: text topic). The alleged author of each text was randomly labeled either "ChatGPT" or "human author" (predictor: authorship). We captured three dependent variables: assessment of author competence, assessment of content quality, and participants' intention to submit the text in a hypothetical university course (sharing intention). We hypothesized interaction effects; that is, we expected ChatGPT to score lower than alleged human authors on moral topics and higher than alleged human authors on technological topics.

Results: We found only a small interaction effect on perceived author competence, p = 0.004, d = 0.40, but none for the other dependent variables. However, ChatGPT was consistently devalued compared to alleged human authors across all dependent variables: there were main effects of authorship on assessment of author competence, p < 0.001, d = 0.95; on assessment of content quality, p < 0.001, d = 0.39; and on sharing intention, p < 0.001, d = 0.57. There was also a small main effect of text topic on the assessment of content quality, p = 0.002, d = 0.35.

Conclusion: These results are more in line with previous findings on algorithm aversion than with algorithm appreciation. We discuss the implications of these findings for the acceptance of LLMs as tools for text composition.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11176609
DOI: http://dx.doi.org/10.3389/frai.2024.1412710

