Purpose: This scoping review aims to explore the current applications of ChatGPT in the retina field, highlighting its potential, challenges, and limitations.
Methods: A comprehensive literature search was conducted across multiple databases, including PubMed, Scopus, MEDLINE, and Embase, to identify relevant articles published from 2022 onwards. The inclusion criteria focused on studies evaluating the use of ChatGPT in retinal healthcare. Data were extracted and synthesized to map the scope of ChatGPT's applications in retinal care, categorizing articles into various practical application areas such as academic research, charting, coding, diagnosis, disease management, and patient counseling.
Results: A total of 68 articles were included in the review, distributed across several categories: 8 related to academics and research, 5 to charting, 1 to coding and billing, 44 to diagnosis, 49 to disease management, 2 to literature consulting, 23 to medical education, and 33 to patient counseling. Many articles were classified into multiple categories due to overlapping topics. The findings indicate that while ChatGPT shows significant promise in areas such as medical education and diagnostic support, concerns regarding accuracy, reliability, and the potential for misinformation remain prevalent.
Conclusion: ChatGPT offers substantial potential in advancing retinal healthcare by supporting clinical decision-making, enhancing patient education, and automating administrative tasks. However, its current limitations, particularly in clinical accuracy and the risk of generating misinformation, necessitate cautious integration into practice, with continuous oversight from healthcare professionals. Future developments should focus on improving accuracy, incorporating up-to-date medical guidelines, and minimizing the risks associated with AI-driven healthcare tools.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11487877
DOI: http://dx.doi.org/10.1186/s40942-024-00595-9
Heliyon
January 2025
Department of Medical Laboratory Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, 22110, Jordan.
Background: Artificial intelligence (AI) technologies are increasingly recognized for their potential to revolutionize research practices. However, there is a gap in understanding the perspectives of researchers in the Middle East and North Africa (MENA) region on ChatGPT. This study explores their knowledge, attitudes, and perceptions of ChatGPT utilization in research.
J Phys Ther Educ
January 2025
Introduction: This study examines the ability of human readers, recurrence quantification analysis (RQA), and an online artificial intelligence (AI) detection tool (GPTZero) to distinguish between AI-generated and human-written personal statements in physical therapist education program applications.
Review Of Literature: The emergence of large language models such as ChatGPT and Google Gemini has raised concerns about the authenticity of personal statements. Previous studies have reported varying degrees of success in detecting AI-generated text.
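This excerpt does not detail how RQA was applied to text, but the general idea can be sketched: encode the writing as a numeric sequence, build a recurrence matrix, and summarize it with metrics such as recurrence rate and determinism. In the minimal Python sketch below, the word-length encoding, similarity threshold, and minimum diagonal length are illustrative assumptions, not the study's parameters.

```python
# Illustrative sketch only: recurrence quantification analysis (RQA) on a text,
# using word length as a stand-in signal. The encoding, threshold, and minimum
# diagonal length are assumptions for demonstration, not the study's settings.
import numpy as np

def rqa_metrics(text: str, threshold: int = 1, min_diag: int = 2):
    # Encode the text as a sequence of word lengths (hypothetical choice).
    signal = np.array([len(w) for w in text.split()], dtype=float)
    n = len(signal)
    # Recurrence matrix: pairs of points whose values differ by at most `threshold`.
    dist = np.abs(signal[:, None] - signal[None, :])
    rec = (dist <= threshold).astype(int)
    recurrence_rate = rec.sum() / (n * n)

    # Determinism: fraction of recurrent points lying on diagonal lines of
    # length >= min_diag (main diagonal excluded).
    diag_points = 0
    for k in range(1, n):
        run = 0
        for v in np.diagonal(rec, offset=k):
            if v:
                run += 1
            else:
                if run >= min_diag:
                    diag_points += run
                run = 0
        if run >= min_diag:
            diag_points += run
    off_diag = rec.sum() - n  # exclude the main diagonal
    determinism = (2 * diag_points / off_diag) if off_diag else 0.0
    return {"recurrence_rate": recurrence_rate, "determinism": determinism}

print(rqa_metrics("This is a short example of a personal statement about physical therapy."))
```

Metrics like these could then be compared between AI-generated and human-written statements, alongside human raters and a detector such as GPTZero.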
Spine J
January 2025
Department of Spine Surgery and Orthopaedics, Xiangya Hospital, Central South University, Xiangya Road 87, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Xiangya Road 87, Changsha 410008, China. Electronic address:
Background: In clinical practice, distinguishing between spinal tuberculosis (STB) and spinal tumors (ST) poses a significant diagnostic challenge. The application of AI-driven large language models (LLMs) shows great potential for improving the accuracy of this differential diagnosis.
Purpose: To evaluate the performance of various machine learning models and ChatGPT-4 in distinguishing between STB and ST.
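The excerpt does not specify the models, features, or prompts used. As a hypothetical sketch of how such a comparison might be framed, the Python snippet below trains a conventional classifier on synthetic tabular features and notes where an LLM arm would be scored on the same held-out cases; the feature set, synthetic data, and model choice are all assumptions.

```python
# Hypothetical sketch: comparing a conventional classifier with an LLM arm for
# distinguishing spinal tuberculosis (STB) from spinal tumors (ST).
# The synthetic features, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)  # 1 = STB, 0 = ST (synthetic labels)
X = np.column_stack([
    rng.normal(60 + 20 * y, 15),  # ESR, shifted upward for the synthetic STB class
    rng.normal(30 + 25 * y, 10),  # CRP
    rng.normal(55, 12, n),        # age
    rng.poisson(1 + y),           # involved vertebral levels
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("Classifier accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# An LLM arm would turn each held-out case into a text vignette, ask the model
# to answer "STB" or "ST", and score the parsed answers with the same metric:
# llm_preds = [ask_llm(vignette_from_features(x)) for x in X_test]
# print("LLM accuracy:", accuracy_score(y_test, llm_preds))
```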
Cureus
January 2025
Pedodontics, Istanbul Turkuaz Dental Clinic, Istanbul, TUR.
Artificial intelligence (AI) has emerged as a transformative tool in education, particularly in specialized fields such as dentistry. This study evaluated the performance of four advanced AI models - ChatGPT-4o (San Francisco, CA: OpenAI), ChatGPT-o1, Gemini 1.5 Pro (Mountain View, CA: Google LLC), and Gemini 2.
Eur J Obstet Gynecol Reprod Biol
January 2025
Department of Gynecology and Obstetrics, University Clinic Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany. Electronic address:
Objective: To investigate the potential of artificial intelligence (AI) in emergency medicine, focusing on its utility in triaging and managing acute gynecologic and obstetric emergencies.
Methods And Materials: This feasibility study assessed ChatGPT's performance in triaging and recommending management interventions for gynecologic and obstetric emergencies, using ten fictive cases. Five common conditions were modeled for each specialty.
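As a rough illustration of how a fictive vignette might be submitted to a chat model for triage, the sketch below uses the OpenAI Python SDK; the model name, prompts, and vignette are assumptions, and the study's actual protocol and scoring rubric are not described in this excerpt.

```python
# Illustrative sketch only: submitting a fictive obstetric vignette to a chat
# model for triage. Model name, prompts, and the vignette are assumptions; the
# study's actual protocol is not described in this excerpt.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

vignette = (
    "A 29-year-old at 34 weeks' gestation presents with painless vaginal "
    "bleeding and a soft, non-tender uterus; vital signs are stable."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "You assist with triage of gynecologic and obstetric emergencies."},
        {"role": "user",
         "content": f"Assign a triage urgency level and outline initial management.\n\nCase: {vignette}"},
    ],
)
print(response.choices[0].message.content)
```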