Objective: We aim to compare the capabilities of ChatGPT 3.5, Microsoft Bing, and Google Gemini in handling neuro-ophthalmological case scenarios.

Methods: Ten randomly chosen neuro-ophthalmological cases from a publicly accessible database were used to test the accuracy and suitability of all three models. Each model was given the case details followed by the query: "What is the most probable diagnosis?"
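The querying protocol can be summarized in a minimal sketch. This is purely illustrative: the study used the models' public chat interfaces, so `send_to_chatbot`, the case-record fields, and the substring grading below are hypothetical stand-ins for what was, per the abstract, a manual process.

    QUERY = "What is the most probable diagnosis?"

    def build_prompt(case_details):
        """Append the fixed diagnostic query to the case description."""
        return f"{case_details}\n\n{QUERY}"

    def count_correct(cases, send_to_chatbot):
        """Count cases where a chatbot's reply names the known diagnosis.

        `send_to_chatbot` is a hypothetical callable wrapping one model's
        chat interface; the substring check is only a placeholder for the
        human grading of accuracy and suitability done in the study.
        """
        correct = 0
        for case in cases:
            reply = send_to_chatbot(build_prompt(case["details"]))
            if case["diagnosis"].lower() in reply.lower():
                correct += 1
        return correct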

Results: In terms of diagnostic accuracy, all three chatbots (ChatGPT 3.5, Microsoft Bing, and Google Gemini) gave the correct diagnosis in four (40%) of the 10 cases, whereas in terms of suitability, ChatGPT 3.5, Microsoft Bing, and Google Gemini gave suitable responses in six (60%), five (50%), and five (50%) of the 10 case scenarios, respectively.

Conclusion: ChatGPT 3.5 performed better than the other two models in handling neuro-ophthalmological case scenarios. These results highlight the potential of artificial intelligence (AI) models for improving medical education and ocular diagnostics.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11092423
DOI: http://dx.doi.org/10.7759/cureus.58232
