ChatGPT is a new artificial intelligence system that is changing the way information can be sought and obtained. In this study, 20 experts in the domains of congenital heart disease, atrial fibrillation, heart failure, or cholesterol evaluated the trustworthiness, value, and danger of ChatGPT-generated responses to four vignettes representing virtual patient questions. Experts generally considered ChatGPT-generated responses trustworthy and valuable, with few considering them dangerous. Forty percent of the experts found ChatGPT responses more valuable than those obtained via Google. Experts appreciated the sophistication and nuance of the responses but also recognized that responses were often incomplete and sometimes misleading.
DOI: http://dx.doi.org/10.1093/eurjcn/zvad038
Urogynecology (Phila)
January 2025
From the Division of Urogynecology, Walter Reed National Military Medical Center, Bethesda, MD.
Importance: Use of the publicly available large language model Chat Generative Pre-trained Transformer (ChatGPT 3.5; OpenAI, 2022) is growing in health care despite variable accuracy.
Objective: The aim of this study was to assess the accuracy and readability of ChatGPT's responses to questions encompassing surgical informed consent in urogynecology.
Objective: To analyze the accuracy of ChatGPT-generated responses to common rhinologic patient questions.
Methods: Ten common questions from rhinology patients were compiled by a panel of 4 rhinology fellowship-trained surgeons based on clinical patient experience. This panel (Panel 1) developed consensus "expert" responses to each question.
J Burn Care Res
January 2025
Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA 15213, United States.
Patients often use Google for their medical questions. With the emergence of artificial intelligence large language models, such as ChatGPT, patients may turn to such technologies as an alternative source of medical information. This study investigates the safety, accuracy, and comprehensiveness of medical responses provided by ChatGPT in comparison to Google for common questions about burn injuries and their management.
Eur Arch Otorhinolaryngol
December 2024
Université de Lyon, Université Lyon 1, Lyon, F-69003, France.
Purpose: The artificial intelligence (AI) chatbot ChatGPT has become a major tool for generating responses in healthcare. This study assessed ChatGPT's ability to generate French preoperative patient-facing medical information (PFI) in rhinology at a level comparable to material provided by an academic source, the French Society of Otorhinolaryngology (Société Française d'Otorhinolaryngologie et Chirurgie Cervico-Faciale, SFORL).
Methods: ChatGPT and SFORL French preoperative PFI in rhinology were compared by analyzing responses to 16 questions regarding common rhinology procedures: ethmoidectomy, sphenoidotomy, septoplasty, and endonasal dacryocystorhinostomy.
Int J Obstet Anesth
November 2024
Department of Anesthesiology, Perioperative and Pain Medicine, Harvard Medical School, Brigham and Women's Hospital, Boston, MA, United States. Electronic address:
Background: Large language models (LLMs), of which ChatGPT is the most well known, are now available to patients to seek medical advice in various languages. However, the accuracy of the information utilized to train these models remains unknown.
Methods: Ten commonly asked questions regarding labor epidurals were translated from English to Spanish, and all 20 questions (the 10 English originals and their 10 Spanish translations) were entered into ChatGPT version 3.
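The studies summarized here describe entering patient questions into the consumer ChatGPT interface. For readers who want to reproduce this kind of question-and-answer collection at scale, the sketch below shows one way to submit a list of questions programmatically via the OpenAI Python SDK; the model name, example questions, and prompt wording are illustrative assumptions, not the protocol used by any of the studies above.

```python
# Minimal sketch: submit patient questions to an OpenAI chat model and
# collect the responses. Assumes the OpenAI Python SDK (openai>=1.0) and
# an OPENAI_API_KEY in the environment. The question list and model name
# are hypothetical examples, not taken from the studies summarized above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "Will an epidural slow down my labor?",          # hypothetical example
    "Can an epidural cause permanent back pain?",    # hypothetical example
]

def ask_chatgpt(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single question and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for q in questions:
        print(q)
        print(ask_chatgpt(q))
        print("-" * 40)
```

Responses gathered this way could then be rated by clinician panels for accuracy and readability, as the studies above did with answers copied from the web interface.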