Purpose: To determine the knowledge level of the ChatGPT, Bing, and Bard artificial intelligence chatbots regarding corneal, conjunctival, and eyelid diseases and their treatment modalities, and to compare their reliability and relative performance.
Methods: Forty-one questions on corneal, conjunctival, and eyelid diseases and their treatment modalities were posed to the ChatGPT, Bing, and Bard chatbots. Each answer was compared against the answer key and classified as correct or incorrect, and the accuracy rates of the three chatbots were compared.
Results: ChatGPT answered 51.2% of the questions correctly, Bing 53.7%, and Bard 68.3%. There was no significant difference in the rates of correct answers among the three chatbots (p = 0.208, Pearson's chi-square test).
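As a minimal sketch of the reported comparison, the Pearson's chi-square test can be run on the correct/incorrect counts back-calculated from the percentages above (21/41, 22/41, and 28/41 correct); these counts are an assumption, not taken from the paper, and the resulting p-value may differ slightly from the published 0.208 depending on rounding and test settings.

```python
# Sketch: Pearson's chi-square test of independence on a 3x2 contingency
# table of correct vs. incorrect answers per chatbot. The counts below are
# assumptions back-calculated from the abstract's percentages (41 questions):
# 51.2% -> 21, 53.7% -> 22, 68.3% -> 28 correct.
from scipy.stats import chi2_contingency

# Rows: ChatGPT, Bing, Bard; columns: [correct, incorrect]
table = [
    [21, 20],  # ChatGPT: 21/41 = 51.2% correct
    [22, 19],  # Bing:    22/41 = 53.7% correct
    [28, 13],  # Bard:    28/41 = 68.3% correct
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A p-value above 0.05 indicates no statistically significant difference in
# accuracy among the three chatbots, consistent with the abstract's finding.
```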
Conclusion: Although current artificial intelligence chatbots provide quick access to information about corneal, conjunctival, and eyelid diseases and their treatment modalities, their answers are not always accurate or up to date, and this information should be evaluated with care.
DOI: http://dx.doi.org/10.1016/j.clae.2024.102125