Introduction: Large language models (LLMs) such as ChatGPT have the potential to transform the delivery of health information. This study aims to evaluate the accuracy and completeness of ChatGPT in responding to questions on recombinant zoster vaccination (RZV) in patients with rheumatic and musculoskeletal diseases.
Methods: A cross-sectional study was conducted using 20 prompts based on information from the Centers for Disease Control and Prevention (CDC), the Advisory Committee on Immunization Practices (ACIP), and the American College of Rheumatology (ACR). These prompts were entered into ChatGPT 3.5. Five rheumatologists independently scored the ChatGPT responses for accuracy (Likert scale, 1 to 5) and completeness (Likert scale, 1 to 3) against the validated information sources (CDC, ACIP, and ACR).
Results: The overall mean accuracy of ChatGPT responses on the 5-point scale was 4.04, with 80% of responses scoring ≥4. The mean completeness score on the 3-point scale was 2.3, with 95% of responses scoring ≥2. All 5 raters rated ChatGPT responses as highly accurate and complete across a range of patient and physician questions on RZV; there was one instance in which a response was rated low in both accuracy and completeness. Although the differences were not statistically significant, ChatGPT demonstrated the highest accuracy and completeness when answering questions related to ACIP guidelines compared with the other information sources.
Conclusions: ChatGPT shows a promising ability to address specific queries regarding RZV for patients with rheumatic and musculoskeletal diseases. However, it is essential to approach ChatGPT with caution because of the risk of misinformation. This study emphasizes the importance of rigorously validating LLMs as a source of health information.
DOI: http://dx.doi.org/10.1097/RHU.0000000000002198