Aims: To develop an algorithm to classify multiple retinal pathologies accurately and reliably from fundus photographs and to validate its performance against human experts.
Methods: We trained a deep convolutional ensemble (DCE), an ensemble of five convolutional neural networks (CNNs), to classify retinal fundus photographs into diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and normal eyes. The CNN architecture was based on the InceptionV3 model, and initial weights were pretrained on the ImageNet dataset. We used 43 055 fundus images from 12 public datasets. Five trained ensembles were then tested on an 'unseen' set of 100 images. Seven board-certified ophthalmologists were asked to classify these test images.
Results: Board-certified ophthalmologists achieved a mean accuracy of 72.7% over all classes, while the DCE achieved a mean accuracy of 79.2% (p=0.03). The DCE had a significantly higher mean F1-score for DR classification than the ophthalmologists (76.8% vs 57.5%; p=0.01) and higher but statistically non-significant mean F1-scores for glaucoma (83.9% vs 75.7%; p=0.10), AMD (85.9% vs 85.2%; p=0.69) and normal eyes (73.0% vs 70.5%; p=0.39). The DCE also showed greater mean agreement between accuracy and confidence (81.6% vs 70.3%; p<0.001).
Discussion: We developed a deep learning model and found that it classified four categories of fundus images more accurately and reliably than board-certified ophthalmologists. This work provides proof-of-principle that an algorithm is capable of accurate and reliable recognition of multiple retinal diseases using only fundus photographs.
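As a rough illustration of the kind of ensemble described in the Methods above, the sketch below builds five ImageNet-pretrained InceptionV3 classifiers with a four-class softmax head and averages their predictions. This is a minimal sketch, not the authors' code: the input size, classification head and soft-voting scheme are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (assumed details, not the paper's implementation) of a
# five-member InceptionV3 ensemble for DR / glaucoma / AMD / normal.
import tensorflow as tf

CLASSES = ["DR", "glaucoma", "AMD", "normal"]

def build_member() -> tf.keras.Model:
    """One ensemble member: ImageNet-pretrained InceptionV3 with a 4-class softmax head."""
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3)
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(len(CLASSES), activation="softmax")(x)
    return tf.keras.Model(base.input, out)

def ensemble_predict(members, images):
    """Soft voting: average the members' softmax outputs, then take the argmax."""
    probs = tf.reduce_mean(
        tf.stack([m(images, training=False) for m in members]), axis=0
    )
    return tf.argmax(probs, axis=-1)

# Five independently trained members form the ensemble; compiling and
# fine-tuning each member on the fundus images is omitted here.
members = [build_member() for _ in range(5)]
```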
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10894841
DOI: http://dx.doi.org/10.1136/bjo-2022-322183
J Vet Med Educ
November 2024
Department of Clinical Studies, Ontario Veterinary College, University of Guelph, 50 Stone Rd E, Guelph, ON N1G 2W1.
Reports regarding curricula in ophthalmology across veterinary schools are not currently available. The objective of this study was therefore to investigate the number of contact hours and approaches to teaching ophthalmology in the curriculum of English-speaking veterinary schools worldwide. An online survey was distributed to 51 veterinary colleges in North America, the United Kingdom, Australia, New Zealand, and the Caribbean.
J Med Internet Res
December 2024
School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China.
Background: Large language models (LLMs) have the potential to enhance clinical workflows and improve medical education, but they encounter challenges related to specialized knowledge in ophthalmology.
Objective: This study aims to enhance ophthalmic knowledge by refining a general LLM into an ophthalmology-specialized assistant for patient inquiries and medical education.
Methods: We transformed Llama2 into an ophthalmology-specialized LLM, termed EyeGPT, through the following 3 strategies: prompt engineering for role-playing, fine-tuning with publicly available data sets filtered for eye-specific terminology (83,919 samples), and retrieval-augmented generation leveraging a medical database and 14 ophthalmology textbooks.
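To make the retrieval-augmented generation strategy above concrete, here is a minimal sketch that assumes a simple TF-IDF retriever over textbook passages and a hypothetical `generate_with_llm` call standing in for the fine-tuned Llama2 model; the actual EyeGPT embedding model, vector store and prompts are not reproduced here.

```python
# Illustrative RAG sketch only: TF-IDF retrieval over assumed textbook passages,
# followed by a role-playing prompt that grounds the answer in retrieved context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Primary open-angle glaucoma is associated with elevated intraocular pressure.",
    "Diabetic retinopathy is graded by microaneurysms, haemorrhages and neovascularisation.",
    # ... passages chunked from the medical database and ophthalmology textbooks ...
]

vectorizer = TfidfVectorizer().fit(passages)
passage_vecs = vectorizer.transform(passages)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), passage_vecs)[0]
    return [passages[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Role-playing prompt (ophthalmology assistant) grounded in retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "You are an ophthalmology assistant. Answer using the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("How is diabetic retinopathy graded?"))
# answer = generate_with_llm(build_prompt(...))  # hypothetical fine-tuned Llama2 call
```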
Clin Ophthalmol
November 2024
Department of Ophthalmology, Wake Forest University School of Medicine, Winston-Salem, NC, USA.
Purpose: To compare large language models (LLMs) in analyzing and responding to a series of difficult ophthalmic cases.
Design: A comparative case series in which LLMs meeting the inclusion criteria were tested on twenty difficult case studies posed in open-text format.
Methods: Fifteen LLMs accessible to ophthalmologists were tested against twenty case studies published in JAMA Ophthalmology.
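A hedged sketch of how such a head-to-head comparison could be scripted is shown below; `ask_model` is a hypothetical placeholder, since the study's actual model interfaces, prompts and grading rubric are not described in this excerpt.

```python
# Illustrative harness only: pose the same open-text case to several LLMs and
# collect their free-text responses for later expert grading.
def ask_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for querying a specific LLM's API or chat interface."""
    raise NotImplementedError

def run_comparison(models: list[str], cases: list[str]) -> dict[tuple[str, int], str]:
    """Return each model's response to each case, keyed by (model, case index)."""
    responses: dict[tuple[str, int], str] = {}
    for model in models:
        for idx, case in enumerate(cases):
            prompt = f"Provide the most likely diagnosis and recommended next step:\n{case}"
            responses[(model, idx)] = ask_model(model, prompt)
    return responses
```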
J Pediatr Ophthalmol Strabismus
October 2024
Purpose: To analyze pediatric ophthalmology-related information on Instagram (Meta Platforms, Inc).
Methods: A cross-sectional study queried 112 common eye terms and conditions from the American Association for Pediatric Ophthalmology and Strabismus website as hashtags on Instagram. A categorical classification system was used to analyze the top 9 posts per hashtag for likes, comments, views, and engagement level ratio (ELR).
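For illustration only, the snippet below shows how per-hashtag post metrics of this kind could be aggregated with pandas; the column names and the ELR formula (interactions per view) are assumptions, as this excerpt does not define ELR.

```python
# Rough sketch of the per-hashtag analysis described above (assumed schema and ELR definition).
import pandas as pd

# One row per analysed post (top 9 posts for each queried hashtag).
posts = pd.DataFrame(
    {
        "hashtag": ["#strabismus", "#strabismus", "#amblyopia"],
        "category": ["patient experience", "education", "education"],
        "likes": [120, 45, 300],
        "comments": [10, 2, 25],
        "views": [5000, 1500, 12000],
    }
)

# Assumed ELR definition: interactions per view (illustrative only).
posts["elr"] = (posts["likes"] + posts["comments"]) / posts["views"]

summary = posts.groupby("hashtag")[["likes", "comments", "views", "elr"]].mean()
print(summary)
```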
Ophthalmol Sci
August 2024
Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California.
Objective: Large language models such as ChatGPT have demonstrated significant potential in question-answering within ophthalmology, but there is a paucity of literature evaluating their ability to generate clinical assessments and discussions. The objectives of this study were to (1) assess the accuracy of assessments and plans generated by ChatGPT and (2) evaluate ophthalmologists' abilities to distinguish between responses generated by clinicians versus ChatGPT.
Design: Cross-sectional mixed-methods study.