Objective: To compare the capabilities of ChatGPT 3.5, Microsoft Bing, and Google Gemini in handling neuro-ophthalmological case scenarios.
Methods: Ten randomly selected neuro-ophthalmological cases from a publicly accessible database were used to test the accuracy and suitability of all three models; each set of case details was followed by the query: "What is the most probable diagnosis?"
Results: In terms of diagnostic accuracy, all three chatbots (ChatGPT 3.5, Microsoft Bing, and Google Gemini) gave the correct diagnosis in four (40%) of the 10 cases, whereas in terms of suitability, ChatGPT 3.5, Microsoft Bing, and Google Gemini provided suitable responses in six (60%), five (50%), and five (50%) of the 10 case scenarios, respectively.
Conclusion: ChatGPT 3.5 performed better than the other two models in handling neuro-ophthalmological case scenarios. These results highlight the potential of artificial intelligence (AI) models to improve medical education and ocular diagnostics.
Full-text PDF: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11092423 (PMC)
DOI: http://dx.doi.org/10.7759/cureus.58232