Purpose: To assess the performance of Chat Generative Pre-Trained Transformer (ChatGPT) when answering self-assessment exam questions in hand surgery and to compare its accuracy on text-only questions with its accuracy on questions that included images.

Methods: This study used 10 self-assessment exams from 2004 to 2013 provided by the American Society for Surgery of the Hand (ASSH). ChatGPT's performance on text-only questions and image-based questions was compared. The primary outcomes were ChatGPT's total score, score on text-only questions, and score on image-based questions. The secondary outcomes were the proportion of questions for which ChatGPT provided additional explanations, the length of those elaborations, and the number of questions for which ChatGPT provided answers with certainty.

Results: Out of 1,583 questions, ChatGPT answered 573 (36.2%) correctly. ChatGPT performed better on text-only questions than on image-based questions. Out of 1,127 text-only questions, ChatGPT answered 442 (39.2%) correctly. Out of the 456 image-based questions, it answered 131 (28.7%) correctly. There was no difference in the proportion of elaborations between text-only and image-based questions. Although there was no difference between the length of elaborations for questions ChatGPT answered correctly and incorrectly, the elaborations provided for image-based questions were longer than those provided for text-only questions. Out of 1,441 confident answers, 548 (38.0%) were correct; out of 142 unconfident answers, 25 (17.6%) were correct.

Conclusions: ChatGPT performed poorly on the ASSH self-assessment exams from 2004 to 2013. It performed better on text-only questions. Even with its highest score of 42% for the year 2012, the AI platform would not have received continuing medical education credit from ASSH or the American Board of Surgery. Even when considering only questions without images, ChatGPT's highest score of 44% correct would not have "passed" the examination.

Clinical Relevance: At this time, medical professionals, trainees, and patients should use ChatGPT with caution as the program has not yet developed proficiency with hand subspecialty knowledge.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11185878
DOI: http://dx.doi.org/10.1016/j.jhsg.2023.11.014
