Background

AI language models have been shown to achieve a passing score on certain imageless diagnostic tests of the USMLE, yet they have failed certain specialty-specific examinations. This suggests that AI ability may differ by medical topic or question difficulty. This study evaluates the performance of two versions of ChatGPT, a popular language-based AI model, on USMLE-style questions across a range of medical topics.

Methods

A total of 900 USMLE-style multiple-choice questions, divided equally across 18 topics and categorized by exam type (Step 1 vs. Step 2), were copied from AMBOSS, a medical learning resource with large question banks. Questions containing images, charts, or tables were excluded because the models evaluated could not process visual input. The questions were entered into ChatGPT-3.5 (version September 25, 2023) and ChatGPT-4 (version April 2023) for multiple trials, and performance data were recorded. The two AI models were compared against human test takers (AMBOSS users) by medical topic and question difficulty.

Results

ChatGPT-4, AMBOSS users, and ChatGPT-3.5 had accuracies of 71.33%, 54.38%, and 46.23%, respectively. GPT-4 was a significant improvement over GPT-3.5, demonstrating 25% greater accuracy and 8% higher concordance between trials (p<.001). GPT model performance was similar between Step 1 and Step 2 content. Performance varied by medical topic for both GPT-3.5 and GPT-4 (p=.027 and p=.002, respectively), but with no clear pattern of variation. Accuracy declined with increasing question difficulty for both GPT models and AMBOSS users (p<.001), although the decline was less pronounced for GPT-4. The GPT models also showed less variability with question difficulty than AMBOSS users, with average drops in accuracy from the easiest to the hardest questions of 45% and 62%, respectively.

Discussion

ChatGPT-4 shows significant improvement over its predecessor, ChatGPT-3.5, in the medical education setting. It is the first ChatGPT model to surpass human performance on modified AMBOSS USMLE tests. While performance varied by medical topic for both models, there was no clear pattern of discrepancy. ChatGPT-4's improved accuracy, concordance, performance on difficult questions, and consistency across topics are promising for its reliability and utility for medical learners.

Conclusion

ChatGPT-4's improvements highlight its potential as a valuable tool in medical education, surpassing human performance in some areas. The lack of a clear performance pattern by medical topic suggests that variability is more related to question complexity than to specific knowledge gaps.
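The abstract reports accuracy, between-trial concordance, and accuracy stratified by difficulty, but does not describe the analysis pipeline. The following is a minimal sketch, not the study's actual code, of how those metrics could be computed from recorded responses; the CSV layout, column names, and file name are hypothetical assumptions, and two trials per model are assumed for the concordance calculation.

```python
# Minimal sketch (not the study's code): per-model accuracy and between-trial
# concordance from recorded multiple-choice responses.
# Assumed CSV columns (hypothetical): question_id, topic, difficulty,
# correct_answer, trial1_answer, trial2_answer.
import pandas as pd

def score_model(csv_path: str) -> dict:
    df = pd.read_csv(csv_path)

    # Accuracy: proportion of first-trial answers matching the answer key.
    accuracy = (df["trial1_answer"] == df["correct_answer"]).mean()

    # Concordance: proportion of questions answered identically in both
    # trials, regardless of correctness.
    concordance = (df["trial1_answer"] == df["trial2_answer"]).mean()

    # Accuracy stratified by AMBOSS difficulty level (e.g., 1 = easiest, 5 = hardest).
    by_difficulty = (
        df.assign(correct=df["trial1_answer"] == df["correct_answer"])
          .groupby("difficulty")["correct"]
          .mean()
    )

    return {"accuracy": accuracy,
            "concordance": concordance,
            "accuracy_by_difficulty": by_difficulty}

if __name__ == "__main__":
    print(score_model("gpt4_responses.csv"))  # hypothetical file name
```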


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756343
DOI: http://dx.doi.org/10.7759/cureus.76309


