AI Article Synopsis

  • Large language models (LLMs) like ChatGPT struggle to interpret data from private repositories such as electronic health records (EHRs), but prompt engineering may improve their accuracy.
  • Through systematic testing of prompt techniques on 490 EHR notes, the study found that structured prompts raised LLM balanced accuracy from 64.3% to 91.4%, outperforming traditional natural language processing methods.
  • The results indicate that LLMs, with proper prompt strategies, can effectively identify clinical insights from EHRs without requiring expert knowledge, suggesting potential applications in other fields for automated data analysis.

Article Abstract

Background: Large language models (LLMs), such as ChatGPT, excel at interpreting unstructured data from public sources, yet are limited when responding to queries on private repositories, such as electronic health records (EHRs). We hypothesized that prompt engineering could enhance the accuracy of LLMs for interpreting EHR data without requiring domain knowledge, thus expanding their utility for patients and personalized diagnostics.

Methods: We designed and systematically tested prompt engineering techniques to improve the ability of LLMs to interpret EHRs for nuanced diagnostic questions, referenced to a panel of medical experts. In 490 full-text EHR notes from 125 patients with prior life-threatening heart rhythm disorders, we asked GPT-4-turbo to identify recurrent arrhythmias distinct from prior events and tested 220 563 queries. To provide context, results were compared with rule-based natural language processing and BERT-based language models. Experiments were repeated for 2 additional LLMs.

Results: In an independent hold-out set of 389 notes, GPT-4-turbo had a balanced accuracy of 64.3%±4.7% out-of-the-box at baseline. Asking GPT-4-turbo to provide a rationale for its answers, requiring a structured data output, and providing in-context exemplars raised balanced accuracy to 91.4%±3.8% (P<0.05). This surpassed the traditional logic-based natural language processing and BERT-based models (P<0.05). Results were consistent for the GPT-3.5-turbo and Jurassic-2 LLMs.
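The abstract names three prompt strategies that drove the accuracy gain: asking for a rationale, requiring structured output, and supplying in-context exemplars. The sketch below is a minimal illustration of how those three strategies can be combined in a single query; it is not the authors' published prompt. The model name follows the paper (gpt-4-turbo), but the system instruction, the exemplar notes, and the JSON schema are invented placeholders for illustration only.

```python
# Illustrative sketch (not the study's actual prompt): combines the three
# reported strategies -- rationale, structured output, in-context exemplars --
# using the OpenAI Python SDK. Note text and exemplars are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You review an electronic health record note and decide whether it "
    "documents a recurrent arrhythmia distinct from the patient's prior "
    "event. First give a brief rationale, then answer. Respond only with "
    'JSON: {"rationale": "<1-2 sentences>", "recurrence": "yes" or "no"}'
)

# In-context exemplars (hypothetical, for illustration only).
EXEMPLARS = [
    {"role": "user", "content": "Note: Patient with prior VF arrest; today's "
     "device check shows new sustained VT at 180 bpm."},
    {"role": "assistant", "content": json.dumps({
        "rationale": "The note documents a new sustained VT episode distinct "
                     "from the prior VF arrest.",
        "recurrence": "yes"})},
    {"role": "user", "content": "Note: Follow-up after VF arrest in 2019; "
     "device interrogation shows no arrhythmic events since implant."},
    {"role": "assistant", "content": json.dumps({
        "rationale": "No new arrhythmia is documented after the index event.",
        "recurrence": "no"})},
]

def classify_note(note_text: str) -> dict:
    """Ask GPT-4-turbo for a rationale plus a structured yes/no label."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,
        response_format={"type": "json_object"},  # enforce structured output
        messages=[{"role": "system", "content": SYSTEM},
                  *EXEMPLARS,
                  {"role": "user", "content": f"Note: {note_text}"}],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(classify_note("Clinic note: ICD interrogation today shows one "
                        "episode of sustained VT terminated by ATP."))
```

In this pattern the rationale request encourages the model to ground its answer in the note's content, while the JSON response format makes the label machine-readable for large-scale evaluation such as the study's hold-out testing.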

Conclusions: The use of prompt engineering strategies enables LLMs to identify clinical end points from EHRs with an accuracy that surpassed natural language processing and approximated experts, yet without the need for expert knowledge. These approaches could be applied to LLM queries for other domains, to facilitate automated analysis of nuanced data sets with high accuracy by nonexperts.

Source
http://dx.doi.org/10.1161/CIRCEP.124.013023

Publication Analysis

Top Keywords

natural language (8)
language processing (8)
language models (8)
prompt engineering (8)
balanced accuracy (8)
engineering generative (4)
generative artificial (4)
artificial intelligence (4)
intelligence natural (4)
language (4)
