Introduction: The emergence of artificial intelligence (AI) chat programs has opened two distinct paths: one that enhances human interaction and another that risks replacing personal understanding. The rapid development of these programs raises ethical and legal concerns. This paper investigates academic discussions of AI in medicine, analyzing the context, frequency, and reasons behind these conversations.
Methods: The study collected data from the Web of Science database on articles containing the keyword "ChatGPT" published from January to September 2023, yielding 786 medically related journal articles. The inclusion criteria were peer-reviewed, English-language articles related to medicine.
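The abstract does not describe the screening tooling, but the inclusion step can be illustrated with a minimal Python sketch, assuming the records were exported from Web of Science as a CSV; the column names used here (title, abstract, language, doc_type, pub_date) are placeholders for illustration, not the actual export schema.

```python
# Hypothetical sketch of the screening step: filter an exported Web of Science
# CSV down to English-language journal articles mentioning "ChatGPT" that were
# published between January and September 2023. Column names are assumptions.
import pandas as pd

records = pd.read_csv("wos_export.csv", parse_dates=["pub_date"])

mask = (
    records["language"].str.lower().eq("english")
    & records["doc_type"].str.contains("Article", case=False, na=False)
    & records["pub_date"].between("2023-01-01", "2023-09-30")
    & (
        records["title"].str.contains("ChatGPT", case=False, na=False)
        | records["abstract"].str.contains("ChatGPT", case=False, na=False)
    )
)
included = records[mask]
print(f"{len(included)} articles meet the inclusion criteria")
```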
Results: The United States led in publications (38.1%), followed by India (15.5%) and China (7.0%). Keywords such as "patient" (16.7%), "research" (12.0%), and "performance" (10.6%) were prevalent. The Cureus Journal of Medical Science published the most articles (11.8%), followed by the Annals of Biomedical Engineering (8.3%). August 2023 had the highest number of publications (29.3%), with notable growth from February to March and from April to May. Medicine, General & Internal (21.0%) was the most common category, followed by Surgery (15.4%) and Radiology (7.9%).
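For the descriptive shares reported above (country, month, and category percentages), a tabulation along the following lines would suffice. This is a minimal sketch continuing the hypothetical `included` frame from the Methods sketch, with "country" and "wos_category" as assumed column names rather than the authors' actual pipeline.

```python
# Hypothetical tabulation of the descriptive statistics; `included` and its
# columns come from the screening sketch above and are assumptions.
shares_by_country = included["country"].value_counts(normalize=True).mul(100).round(1)
monthly_counts = included["pub_date"].dt.to_period("M").value_counts().sort_index()
category_shares = included["wos_category"].value_counts(normalize=True).mul(100).round(1)

print(shares_by_country.head(3))  # leading countries, e.g. USA / India / China
print(monthly_counts)             # month-by-month volume; peak reported in August 2023
print(category_shares.head(3))    # leading Web of Science categories
```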
Discussion: India's prominence in ChatGPT research, despite lower research funding, indicates the platform's popularity there and highlights the importance of monitoring its use for potential medical misinformation. China's interest in ChatGPT research suggests a focus on natural language processing (NLP) applications of AI, even though the platform is publicly banned there. Cureus's success in publishing ChatGPT articles can be attributed to its open-access, rapid-publication model. The study identifies research trends in plastic surgery, radiology, and obstetrics and gynecology, emphasizing the need for ethical considerations and reliability assessments in applying ChatGPT to medical practice.
Conclusion: ChatGPT's presence in medical literature is growing rapidly across various specialties, but concerns related to safety, privacy, and accuracy persist. More research is needed to assess its suitability for patient care and implications for non-medical use. Skepticism and thorough review of research are essential, as current studies may face retraction as more information emerges.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10993428 | PMC
http://dx.doi.org/10.1186/s12245-024-00624-2 | DOI Listing
Am J Health Promot
January 2025
College of Social Work, University of South Carolina, Columbia, SC, USA.
Purpose: Artificially intelligent (AI) chatbots have the potential to produce information that supports shared prostate cancer (PrCA) decision-making. Therefore, our purpose was to evaluate and compare the accuracy, completeness, readability, and credibility of responses from standard and advanced versions of popular chatbots: ChatGPT-3.5, ChatGPT-4.
Transl Vis Sci Technol
January 2025
Glaucoma Service, Wills Eye Hospital, Philadelphia, PA, USA.
Purpose: The integration of artificial intelligence (AI), particularly deep learning (DL), with optical coherence tomography (OCT) offers significant opportunities in the diagnosis and management of glaucoma. This article explores the application of various DL models in enhancing OCT capabilities and addresses the challenges associated with their clinical implementation.
Methods: A review of articles utilizing DL models was conducted, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, and large language models (LLMs).
Ophthalmic Physiol Opt
January 2025
ISEC LISBOA-Instituto Superior de Educação e Ciências, Lisbon, Portugal.
Purpose: The purpose of this study was to compare the perception and understanding of the information provided by ChatGPT regarding myopia among optometry students, optometrists undertaking a Master's degree and practicing optometrists.
Methods: This was a cross-sectional descriptive study using a structured questionnaire distributed via Wooclap to 225 participants (125 optometry students, 21 Master's students and 79 practicing optometrists). All participants evaluated the responses generated by ChatGPT Version 4.
Dent Traumatol
January 2025
Department of Pediatric Dentistry, Dentistry Faculty, Bolu Abant İzzet Baysal University, Bolu, Turkey.
Background/aim: The use of AI-driven chatbots for accessing medical information is increasingly popular among educators and students. This study aims to assess two different ChatGPT models, ChatGPT 3.5 and ChatGPT 4.
Updates Surg
January 2025
Department of Surgery, St. Paul's Hospital, 1081 Burrard St., Vancouver, BC, V6Z 1Y6, Canada.
This study aims to analyze the accuracy of human reviewers in identifying scientific abstracts generated by ChatGPT compared to the original abstracts. Participants completed an online survey presenting two research abstracts: one generated by ChatGPT and one original abstract. They had to identify which abstract was generated by AI and provide feedback on their preference and perceptions of AI technology in academic writing.