Medical validity and layperson interpretation of emergency visit recommendations by the GPT model: A cross-sectional study

Aim: In Japan, many emergency ambulance dispatches involve minor cases that require only outpatient care, underscoring the need for better public guidance on emergency services. This study evaluated both the medical validity of the GPT model's advice in helping laypersons decide whether emergency medical care is needed and how laypersons interpret its outputs.

Methods: This cross-sectional study was conducted from December 10, 2023, to March 7, 2024. We input clinical scenarios into the GPT model and evaluated the need for emergency visits based on the outputs. A total of 314 scenarios were labeled with red tags (emergency, immediate emergency department [ED] visit) and 152 with green tags (less urgent). Seven medical specialists assessed the outputs' validity, and 157 laypersons interpreted them via a web-based questionnaire.

Results: Experts reported that the GPT model accurately identified important information in 95.9% (301/314) of red-tagged scenarios and recommended an immediate ED visit in 96.5% (303/314). However, laypersons interpreted only 43.0% (135/314) of those outputs as indicating an urgent hospital visit. For green-tagged scenarios, the model identified important information in 99.3% (151/152) and advised against an immediate visit in 88.8% (135/152), yet laypersons regarded only 32.2% (49/152) of those outputs as calling for routine follow-up.

Conclusions: Expert evaluations revealed that the GPT model could be highly accurate in advising on emergency visits. However, laypersons frequently misinterpreted its recommendations, highlighting a substantial gap in understanding AI-generated medical advice.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11897724
DOI: http://dx.doi.org/10.1002/ams2.70042

Publication Analysis

Top Keywords (frequency): gpt model (20), emergency (8), cross-sectional study (8), emergency visits (8), laypersons interpreted (8), model (6), medical (5), gpt (5), laypersons (5), visits (5)
