Objectives: Large Language Models (LLMs) have been proposed as a solution to the high volume of Patient Medical Advice Requests (PMARs). This study examines whether, with prompt engineering, LLMs can generate high-quality draft responses to PMARs that satisfy both patients and clinicians.

Materials And Methods: We designed a novel human-involved iterative process to develop and validate prompts for generating appropriate LLM responses to PMARs. GPT-4 was used to generate draft responses to the messages. At each iteration we updated the prompts, evaluated both clinician and patient acceptance of the LLM-generated draft responses, and then tested the optimized prompt on independent validation datasets. The optimized prompt was implemented in the electronic health record production environment and tested by 69 primary care clinicians.
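To make the workflow concrete, the sketch below shows one way a draft could be generated with GPT-4 through the OpenAI Python SDK; the system prompt, helper name, and example message are hypothetical illustrations, not the study's actual prompt or implementation.

# A minimal sketch, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
# The prompt text and names below are illustrative placeholders, not the study's optimized prompt.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are drafting a reply to a patient portal message on behalf of a primary care clinician. "
    "Be empathetic, use plain language, and flag anything that needs an in-person visit. "
    "The clinician will review and edit this draft before it is sent."
)

def draft_pmar_response(patient_message: str) -> str:
    """Return an LLM-drafted reply intended for clinician review, not direct sending."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_message},
        ],
        temperature=0.3,  # keep drafts conservative and consistent across iterations
    )
    return response.choices[0].message.content

print(draft_pmar_response("My home blood pressure readings have been higher this week. Should I adjust my medication?"))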

Results: After 3 iterations of prompt engineering, physician acceptance of draft suitability increased from 62% to 84% (P < .001) in the validation dataset (N = 200), and 74% of drafts in the test dataset were rated as "helpful." Patients also rated message tone (78%) and overall quality (80%) significantly more favorably for the optimized prompt than for the original prompt in the training dataset. Patients were unable to differentiate human- and LLM-generated draft PMAR responses for 76% of the messages, in contrast to their earlier preference for human-generated responses. A majority (72%) of clinicians believed the tool could reduce the cognitive load of dealing with InBasket messages.
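The abstract does not report which statistical test produced the P < .001 comparison; purely as an illustration, the reported acceptance rates (62% vs 84% of N = 200 drafts) can be compared with a two-proportion z-test as sketched below.

# Illustrative only: a two-proportion z-test on the reported acceptance rates.
# The authors' actual test is not stated in the abstract; counts below are 62% and 84% of 200.
from statsmodels.stats.proportion import proportions_ztest

accepted = [124, 168]  # drafts rated suitable before and after prompt optimization
totals = [200, 200]
z_stat, p_value = proportions_ztest(accepted, totals)
print(f"z = {z_stat:.2f}, p = {p_value:.1e}")  # p falls far below .001, consistent with the reported P < .001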

Discussion And Conclusion: Informed synergistically by clinician and patient feedback, tuning the LLM prompt alone can be effective in creating clinically relevant and useful draft responses to PMARs.


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11413421 (PMC)
http://dx.doi.org/10.1093/jamia/ocae172 (DOI Listing)

Publication Analysis

Top Keywords (term, frequency): prompt engineering (8); large language (8); language models (8); draft responses (8); responses pmars (8); optimized prompt (8); prompt (5); engineering leveraging (4); leveraging large (4); models generating (4)

Similar Publications

Today, there are environmental problems all over the world due to greenhouse gas emissions caused by the combustion of diesel fuel. The excessive consumption and drastic depletion of fossil fuels have prompted the leaders of various countries, including Iran, to put the use of alternative and clean energy sources on the agenda. In recent years, the use of biofuels and the addition of nanoparticles to diesel fuel have reduced pollutant emissions, improved the environment, and enhanced the physicochemical properties of the fuel.


Event co-occurrences for prompt-based generative event argument extraction.

Sci Rep

December 2024

School of Computer Science and Technology (School of Cyberspace Security), Xinjiang University, Urumqi, 830046, China.

Recent works have introduced prompt learning for Event Argument Extraction (EAE), since prompt-based approaches transform downstream tasks into a format more consistent with the pretraining task of the Pre-trained Language Model (PLM). This helps bridge the gap between downstream tasks and model pretraining. However, these previous works overlooked the varying number of events and the complex relationships among them within sentences.
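As a rough, hypothetical sketch of the general idea (not this paper's method), prompt-based EAE typically recasts argument extraction as cloze-style infilling so that it resembles the PLM's pretraining objective, for example:

# Hypothetical cloze-style template for prompt-based event argument extraction.
# Names, template wording, and the example sentence are illustrative only.
TEMPLATE = "In the {event_type} event triggered by '{trigger}', the {role} is [MASK]."

def build_prompt(sentence: str, event_type: str, trigger: str, role: str) -> str:
    """Append a cloze template to the input sentence for one argument role."""
    return f"{sentence} {TEMPLATE.format(event_type=event_type, trigger=trigger, role=role)}"

print(build_prompt(
    "The company acquired the startup for $2 million last year.",
    event_type="Transaction", trigger="acquired", role="buyer",
))
# A masked language model would then be asked to fill [MASK] with the argument span ("The company").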


Inclusive AI for radiology: Optimising ChatGPT-4 with advanced prompt engineering.

Clin Imaging

December 2024

Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India.

This letter responds to the article "Encouragement vs. liability: How prompt engineering influences ChatGPT-4's radiology exam performance," offering additional perspectives on optimising ChatGPT-4 for Radiology applications. While the study highlights the significance of prompt engineering, we suggest that addressing additional key challenges such as age-related diagnostic needs, socio-economic diversity, data security, and liability concerns is essential for responsible AI integration.


Lies are ubiquitous and often happen in social interactions. However, socially conducted deception makes it hard to collect data, since people are unlikely to self-report their intentional deception behaviors, especially malicious ones. Social deduction games, a type of social game in which deception is a key gameplay mechanic, can be a good alternative for studying social deception.


Coronary artery disease (CAD) is the main cause of death. It is a complex heart disease that is linked with many risk factors and a variety of symptoms. In the past few years, CAD has experienced remarkable growth.

