AI Article Synopsis

  • The study examined the effectiveness and safety of ChatGPT-4 in managing patients with positive blood cultures in a hospital setting over four weeks.
  • ChatGPT-4 provided comprehensive management plans that matched the recommendations of infectious disease consultants in about 59% of cases, but showed limitations in diagnostic accuracy and in the appropriateness of suggested treatments.
  • The research concluded that relying solely on ChatGPT-4 for medical advice can be risky, particularly for severe infections, highlighting the need for expert oversight.

Article Abstract

Background: The development of chatbot artificial intelligence (AI) has raised major questions about its use in healthcare. We assessed the quality and safety of the management suggested by Chat Generative Pre-training Transformer 4 (ChatGPT-4) in real-life practice for patients with positive blood cultures.

Methods: Over a 4-week period in a tertiary care hospital, data from consecutive infectious diseases (ID) consultations for a first positive blood culture were prospectively provided to ChatGPT-4, which was asked to propose a comprehensive management plan (suspected/confirmed diagnosis, workup, antibiotic therapy, source control, follow-up). We compared the management plan suggested by ChatGPT-4 with the plan suggested by ID consultants based on literature and guidelines. Comparisons were performed by 2 ID physicians not involved in patient management.

Results: Forty-four cases with a first episode of positive blood culture were included. ChatGPT-4 provided detailed and well-written responses in all cases. AI's diagnoses were identical to those of the consultant in 26 (59%) cases. Suggested diagnostic workups were satisfactory (ie, no missing important diagnostic tests) in 35 (80%) cases; empirical antimicrobial therapies were adequate in 28 (64%) cases and harmful in 1 (2%). Source control plans were inadequate in 4 (9%) cases. Definitive antibiotic therapies were optimal in 16 (36%) patients and harmful in 2 (5%). Overall, management plans were considered optimal in only 1 patient, satisfactory in 17 (39%), and harmful in 7 (16%).

Conclusions: The use of ChatGPT-4 without consultant input remains hazardous when seeking expert medical advice in 2023, especially for severe IDs.

Source
http://dx.doi.org/10.1093/cid/ciad632

Publication Analysis

Top Keywords
positive blood (12), chatbot artificial (8), artificial intelligence (8), infectious diseases (8), blood culture (8), management plan (8), source control (8), plan suggested (8), cases (6), management (5)
