Background: Large language models (LLMs) are increasingly used to provide clinical information to both patients and physicians. This study aimed to investigate the effectiveness of three LLMs, ChatGPT-4.0, Google Gemini, and Microsoft Copilot, in responding to patient questions about refractive surgery.
Methods: The three LLMs' responses to 25 questions about refractive surgery frequently asked by patients were evaluated by two ophthalmologists using a 5-point Likert scale (scores from 1 to 5). The DISCERN instrument was used to assess the reliability of the responses, while the Flesch Reading Ease and Flesch-Kincaid Grade Level indices were used to evaluate readability.
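For context, both readability indices are computed from average sentence length and average syllables per word using the standard published formulas (general knowledge, not taken from this study):

Flesch Reading Ease (FRE) = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
Flesch-Kincaid Grade Level (FKGL) = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

Higher FRE values indicate easier text, while higher FKGL values correspond to more years of schooling required. A minimal sketch of how such scores can be computed, assuming the open-source textstat Python package rather than whatever tool the study's authors actually used:

```python
# Minimal sketch using the open-source textstat package; the abstract
# does not specify which software computed the indices in the study.
import textstat

response = "LASIK reshapes the cornea with a laser to correct vision."

print(textstat.flesch_reading_ease(response))   # higher = easier to read
print(textstat.flesch_kincaid_grade(response))  # approximate US grade level
```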
Results: Likert scores differed significantly among the three LLMs (p = 0.022). Pairwise comparisons showed that ChatGPT-4.0's Likert score was significantly higher than Microsoft Copilot's (p = 0.005), whereas its difference from Google Gemini was not significant (p = 0.087). In terms of reliability, ChatGPT-4.0 stood out, receiving the highest DISCERN scores of the three LLMs; in terms of readability, however, it received the lowest score.
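The abstract does not name the statistical tests used. For ordinal Likert ratings, a common pipeline is an omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney U tests; the sketch below illustrates that pattern with invented placeholder scores, not the study's data or code:

```python
# Hypothetical sketch of a common analysis for ordinal Likert ratings.
# The scores below are invented placeholders, not the study's data.
from scipy.stats import kruskal, mannwhitneyu

chatgpt = [5, 4, 5, 4, 5, 4, 5, 5]   # hypothetical per-question ratings
gemini  = [4, 4, 5, 3, 4, 4, 5, 4]
copilot = [3, 4, 3, 4, 3, 4, 4, 3]

# Omnibus test: do the three models' rating distributions differ?
h_stat, p_omnibus = kruskal(chatgpt, gemini, copilot)
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_omnibus:.3f}")

# Pairwise follow-up tests (two-sided); in practice the p-values
# would be corrected for multiple comparisons, e.g. with Bonferroni.
for name, other in [("Microsoft Copilot", copilot), ("Google Gemini", gemini)]:
    u_stat, p = mannwhitneyu(chatgpt, other, alternative="two-sided")
    print(f"ChatGPT-4.0 vs {name}: U = {u_stat:.1f}, p = {p:.3f}")
```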
Conclusions: ChatGPT-4.0's responses to questions about refractive surgery were harder for patients to read than those of the other language models; however, the information it provided was more reliable and accurate.
DOI: http://dx.doi.org/10.1016/j.ijmedinf.2025.105787