Objectives: To evaluate the performance of a Generative Pre-trained Transformer (GPT) in generating scientific abstracts in dentistry.
Methods: The original abstracts of 10 scientific articles in dental radiology were collected, while the methodology and results sections of another 10 articles were supplied in a ChatGPT prompt to generate an abstract. All 20 abstracts were randomised and compiled into a single file for subsequent assessment. Five evaluators rated, on a 5-point scale, whether each abstract was written by a human and justified their ratings across seven aspects: formatting, information accuracy, orthography, punctuation, terminology, text fluency, and writing style. In addition, an online GPT detector provided "Human Score" values, and a plagiarism detector assessed similarity with the existing literature.
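As an illustration of the generation step, below is a minimal sketch of how methodology and results text might be supplied to ChatGPT through the OpenAI Python client. The model name, prompt wording, and parameters are assumptions; the study does not specify them.

```python
# Minimal sketch of the abstract-generation step (assumed details:
# the study does not state which model, prompt, or parameters were used).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_abstract(methodology: str, results: str) -> str:
    """Ask the model to write an abstract from an article's
    methodology and results sections."""
    prompt = (
        "Write a scientific abstract for a dental radiology article "
        "based on the following sections.\n\n"
        f"Methodology:\n{methodology}\n\nResults:\n{results}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper only says "ChatGPT"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```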
Results: Sensitivity values for detecting human writing ranged from 0.20 to 0.70, with a mean of 0.58; specificity values ranged from 0.40 to 0.90, with a mean of 0.62; and accuracy values ranged from 0.50 to 0.80, with a mean of 0.60. Orthography and punctuation were the aspects most frequently cited for the ChatGPT-generated abstracts. The GPT detector reported a mean "Human Score" of 16.9% for the AI-generated texts, and plagiarism levels averaged 35%.
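For reference, the reported metrics follow the standard confusion-matrix definitions, treating "human-written" as the positive class. The sketch below shows the computation; the example labels are illustrative, not the study's data.

```python
# Confusion-matrix metrics with "human-written" as the positive class.
def detection_metrics(truth: list[bool], judged: list[bool]):
    """truth[i]  -- abstract i really is human-written
    judged[i] -- evaluator classified abstract i as human-written"""
    tp = sum(t and j for t, j in zip(truth, judged))
    tn = sum(not t and not j for t, j in zip(truth, judged))
    fp = sum(not t and j for t, j in zip(truth, judged))
    fn = sum(t and not j for t, j in zip(truth, judged))
    sensitivity = tp / (tp + fn)   # human abstracts correctly identified
    specificity = tn / (tn + fp)   # AI abstracts correctly identified
    accuracy = (tp + tn) / len(truth)
    return sensitivity, specificity, accuracy

# Illustrative only: 10 human (True) then 10 AI-generated (False) abstracts
truth = [True] * 10 + [False] * 10
judged = [True] * 6 + [False] * 4 + [False] * 6 + [True] * 4
print(detection_metrics(truth, judged))  # (0.6, 0.6, 0.6)
```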
Conclusion: ChatGPT exhibited commendable performance in generating scientific abstracts when evaluated by humans, as the generated abstracts were largely indistinguishable from those written by humans. When the texts were assessed by an online GPT detector, however, the use of GPT became apparent.
DOI: http://dx.doi.org/10.1111/eje.13057