Background: Large language models (LLMs) excel at answering knowledge-based questions. Many aspects of blood banking and transfusion medicine involve no direct patient care and require only knowledge and judgment. We hypothesized that public LLMs could perform such tasks with accuracy and precision.

Study Design and Methods: We presented three sets of tasks to three publicly available LLMs (Bard, GPT-3.5, and GPT-4). The first task was to review short case presentations and decide whether a red blood cell transfusion was indicated. The second was to answer a set of consultation questions common in clinical transfusion practice. The third was to take a multiple-choice test experimentally validated to assess internal medicine postgraduate knowledge of transfusion practice (the BEST-TEST).
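The abstract does not describe how the case presentations were submitted to the models. As one illustration only, the sketch below shows how a vignette might be posed to GPT-4 through the OpenAI Python client (openai >= 1.0); the helper name, prompt wording, and vignette are hypothetical and not taken from the study.

```python
# Hypothetical sketch (not from the paper): presenting a case vignette to GPT-4
# via the OpenAI Python client and recording a yes/no transfusion decision.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_transfusion_decision(case_vignette: str, model: str = "gpt-4") -> str:
    """Return the model's free-text answer to a transfusion-indication question."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are assisting with transfusion medicine decisions."},
            {"role": "user",
             "content": f"{case_vignette}\n\nIs a red blood cell transfusion "
                        f"indicated? Answer yes or no, then explain."},
        ],
        temperature=0,  # keep output as deterministic as possible for grading
    )
    return response.choices[0].message.content

# Example with a toy vignette (not one of the study cases):
# print(ask_transfusion_decision(
#     "72-year-old with hemoglobin 6.4 g/dL and active chest pain."))
```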

Results: In the first task, the area under the receiver operating characteristic curve (AUROC) for correct transfusion decisions was 0.65, 0.90, and 0.92 for Bard, GPT-3.5, and GPT-4, respectively. All three models had a modest rate of acceptable responses to the consultation questions. Average scores on the BEST-TEST were 55%, 40%, and 87%, respectively.
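For readers unfamiliar with the metric, an AUROC of this kind can be computed from paired model decisions and ground-truth indications with scikit-learn. The minimal sketch below uses made-up labels, not the study data, and assumes each response is coded 1 (transfuse) or 0 (do not transfuse).

```python
# Minimal sketch (illustrative data only, not the study's results): computing the
# area under the ROC curve for a model's transfusion decisions with scikit-learn.
from sklearn.metrics import roc_auc_score

# Ground truth: 1 = transfusion indicated, 0 = not indicated (hypothetical cases).
y_true  = [1, 1, 0, 0, 1, 0, 0, 1]
# Model output coded the same way (a binary decision yields a single-threshold ROC).
y_model = [1, 1, 0, 1, 1, 0, 0, 0]

print(f"AUROC = {roc_auc_score(y_true, y_model):.2f}")
```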

Conclusion: When presented with transfusion medicine tasks in natural language, publicly available LLMs demonstrated a range of ability, but GPT-4 consistently scored very well across all tasks. Research is needed to assess the utility of LLMs in transfusion medicine practice. Transfusion medicine physicians should consider their role alongside such technologies and how they might be used for the benefit and safety of patients.

Source: http://dx.doi.org/10.1111/trf.17526
