Comparing ChatGPT and a Single Anesthesiologist's Responses to Common Patient Questions: An Exploratory Cross-Sectional Survey of a Panel of Anesthesiologists.

Increased patient access to electronic medical records and resources has resulted in higher volumes of health-related questions posed to clinical staff, while physicians' rising clinical workloads have left less time for comprehensive, thoughtful responses to patient questions. Artificial intelligence chatbots powered by large language models (LLMs) such as ChatGPT could help anesthesiologists respond efficiently to electronic patient inquiries, but their ability to do so is unclear. A cross-sectional exploratory survey-based study was performed comprising 100 anesthesia-related patient question/response sets based on two fictitious simple clinical scenarios. Each question was answered by an independent board-certified anesthesiologist and by ChatGPT (GPT-3.5 model, August 3, 2023 version). The responses were randomized and evaluated via survey by three blinded board-certified anesthesiologists on various quality and empathy measures. On a 5-point Likert scale, ChatGPT received similar overall quality ratings (4.2 vs. 4.1, p = .81) and significantly higher overall empathy ratings (3.7 vs. 3.4, p < .01) compared with the anesthesiologist. ChatGPT underperformed the anesthesiologist on rate of agreement with scientific consensus (96.6% vs. 99.3%, p = .02) and possibility of harm (4.7% vs. 1.7%, p = .04), but performed similarly on the other measures: percentage of responses with inappropriate/incorrect information (5.7% vs. 2.7%, p = .07) and with missing information (10.0% vs. 7.0%, p = .19). In conclusion, LLMs show great potential in healthcare, but additional improvement is needed to decrease the risk of patient harm and reduce the need for close physician oversight. Further research with more complex clinical scenarios, clinicians, and live patients is necessary to validate their role in healthcare.
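
The abstract reports its group comparisons as percentages and p-values but does not name the statistical tests used. As a hedged illustration only, the Python sketch below reruns the four proportion comparisons with Fisher's exact test; the test choice, the denominator of 300 ratings per arm (100 question sets × 3 blinded raters), and the counts reconstructed from the reported percentages are all assumptions, not details taken from the paper.

# Hedged re-analysis sketch; the abstract does not state which tests were used.
# Assumptions: Fisher's exact test, N = 300 ratings per arm
# (100 question/response sets x 3 blinded raters), and counts
# reconstructed from the reported percentages.
from scipy.stats import fisher_exact

N = 300  # assumed number of ratings per arm

# (measure, ChatGPT rate, anesthesiologist rate) from the abstract
comparisons = [
    ("agreement with scientific consensus", 0.966, 0.993),
    ("possibility of harm",                 0.047, 0.017),
    ("inappropriate/incorrect information", 0.057, 0.027),
    ("missing information",                 0.100, 0.070),
]

for measure, rate_gpt, rate_md in comparisons:
    gpt_yes, md_yes = round(rate_gpt * N), round(rate_md * N)
    # 2x2 contingency table: rows = response source, columns = yes/no
    _, p_value = fisher_exact([[gpt_yes, N - gpt_yes],
                               [md_yes, N - md_yes]])
    print(f"{measure}: {rate_gpt:.1%} vs. {rate_md:.1%}, p = {p_value:.3f}")

For the 5-point Likert quality and empathy ratings, which are ordinal, a rank-based test such as scipy.stats.mannwhitneyu would be the analogous nonparametric choice, but the raw per-response scores are not reproduced here.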

Source
http://dx.doi.org/10.1007/s10916-024-02100-z

Publication Analysis

Top Keywords

patient questions (8)
patient (5)
comparing chatgpt (4)
chatgpt single (4)
single anesthesiologist's (4)
anesthesiologist's responses (4)
responses common (4)
common patient (4)
questions exploratory (4)
exploratory cross-sectional (4)
