Background: Injuries pose a significant global health challenge due to their high incidence and mortality rates. Although injury surveillance is essential for prevention, it is resource-intensive. This study aimed to develop and validate locally deployable large language models (LLMs) to extract core injury-related information from Emergency Department (ED) clinical notes.
Methods: We conducted a diagnostic study using retrospectively collected data from January 2014 to December 2020 from two urban academic tertiary hospitals. One served as the derivation cohort and the other as the external test cohort. Adult patients presenting to the ED with injury-related complaints were included. Primary outcomes included classification accuracies for information extraction tasks related to injury mechanism, place of occurrence, activity, intent, and severity. We fine-tuned a single generalizable Llama-2 model and five task-specific Bidirectional Encoder Representations from Transformers (BERT) models, one per task, to extract this information from initial ED physician notes; the Llama-2 model performed the different tasks by varying its instruction prompt. Data recorded in injury registries provided the gold-standard labels. Model performance was assessed using accuracy and macro-average F1 scores.
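The abstract does not reproduce the prompt templates or evaluation code. The sketch below is an illustration only, with assumed task wording, label examples, and helper names, of how a single instruction-tuned model can be switched between the five extraction tasks by changing the prompt, and how accuracy and macro-average F1 could be computed with scikit-learn.

```python
# Hypothetical sketch: task switching via instruction prompts and evaluation.
# Prompt wording and label sets are assumptions, not the study's actual templates.
from sklearn.metrics import accuracy_score, f1_score

TASK_INSTRUCTIONS = {
    "mechanism": "Identify the injury mechanism (e.g., fall, traffic, struck by object).",
    "place": "Identify the place of occurrence (e.g., home, road, workplace).",
    "activity": "Identify the activity at the time of injury (e.g., work, sports, leisure).",
    "intent": "Identify the intent (unintentional, self-harm, assault).",
    "severity": "Identify the injury severity (mild, moderate, severe).",
}

def build_prompt(task: str, note: str) -> str:
    """Compose one instruction prompt; the same model handles all five tasks."""
    return (
        f"### Instruction:\n{TASK_INSTRUCTIONS[task]}\n\n"
        f"### ED physician note:\n{note}\n\n### Answer:"
    )

def evaluate(gold: list, predicted: list) -> dict:
    """Accuracy and macro-average F1, the metrics reported in the study."""
    return {
        "accuracy": accuracy_score(gold, predicted),
        "macro_f1": f1_score(gold, predicted, average="macro"),
    }

# Toy usage (illustrative labels only):
print(build_prompt("intent", "72M slipped on ice at home, presenting with hip pain."))
print(evaluate(["unintentional", "assault"], ["unintentional", "unintentional"]))
```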
Results: The derivation and external test cohorts comprised 36,346 and 32,232 patients, respectively. In the derivation cohort's test set, the Llama-2 model achieved accuracies (95% confidence intervals) of 0.899 (0.889-0.909) for injury mechanism, 0.774 (0.760-0.789) for place of occurrence, 0.679 (0.665-0.694) for activity, 0.972 (0.967-0.977) for intent, and 0.935 (0.926-0.943) for severity. The Llama-2 model outperformed the BERT models in accuracy and macro-average F1 scores across all tasks in both cohorts. Imposing constraints on the Llama-2 model to avoid uncertain predictions further improved its accuracy.
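The abstract does not state how the constraint against uncertain predictions was implemented; one plausible reading is a confidence threshold below which the model abstains rather than guesses. The sketch below illustrates that idea with assumed probabilities and an assumed threshold; it is not the study's actual procedure.

```python
# Hypothetical confidence constraint: abstain when the best label probability is
# below a threshold, so only confident predictions are scored. Threshold and
# probabilities are assumptions; the abstract does not specify the mechanism.
import numpy as np

def constrained_predictions(label_probs, labels, threshold=0.8):
    """label_probs: (n_samples, n_labels) probabilities over candidate labels."""
    best = label_probs.argmax(axis=1)
    confident = label_probs.max(axis=1) >= threshold
    return [labels[i] if keep else None for i, keep in zip(best, confident)]

probs = np.array([[0.95, 0.03, 0.02],   # confident -> prediction kept
                  [0.40, 0.35, 0.25]])  # uncertain -> model abstains
print(constrained_predictions(probs, ["unintentional", "self-harm", "assault"]))
# ['unintentional', None]
```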
Conclusion: Locally deployable LLMs, trained to extract core injury-related information from free-text ED clinical notes, demonstrated good performance. Generative LLMs can serve as versatile solutions for various injury-related information extraction tasks.
Full text:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611659
DOI: http://dx.doi.org/10.3346/jkms.2024.39.e291
Sci Rep
December 2024
Earth & Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM, 87544, USA.
To reduce environmental risks and impacts from orphaned wells (abandoned oil and gas wells), it is essential to first locate and then plug these wells. Manual reading and digitizing of information from historical documents is not feasible, given the large number of wells. Here, we propose a new computational approach for rapidly and cost-effectively characterizing these wells.
Jpn J Radiol
December 2024
MRI Unit, Radiology Department, HT Medica. Carmelo Torres nº2, 23007, Jaén, Spain.
Background And Objective: Structured reports in radiology have demonstrated substantial advantages over unstructured ones. However, the transition from unstructured to structured reporting can face challenges, as experienced radiologists worry about the potential loss of valuable information. In this study, we fine-tuned a Llama 2 model to generate structured pituitary MRI reports from unstructured reports.
J Vis Exp
December 2024
Department of Surgery, Division of Surgical Oncology, Medical College of Wisconsin;
Large language models (LLMs) have emerged as a popular resource for generating information relevant to a user query. Such models are created through a resource-intensive training process utilizing an extensive, static corpus of textual data. This static nature results in limitations for adoption in domains with rapidly changing knowledge, proprietary information, and sensitive data.
J Am Med Inform Assoc
December 2024
CEIEC, Universidad Francisco de Vitoria, Pozuelo de Alarcón, 28223 Madrid, Spain.
Objectives: We evaluate the effectiveness of large language models (LLMs), specifically GPT-based (GPT-3.5 and GPT-4) and Llama-2 models (13B and 7B architectures), in autonomously assessing clinical records (CRs) to enhance medical education and diagnostic skills.
Materials And Methods: Various techniques, including prompt engineering, fine-tuning (FT), and low-rank adaptation (LoRA), were implemented and compared on Llama-2 7B.
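The excerpt names the adaptation techniques but not their settings. Purely as an illustration, a LoRA setup for Llama-2 7B might be expressed with Hugging Face's peft library as follows; the hyperparameters and model identifier are assumptions, not those reported in the cited work.

```python
# Illustrative LoRA configuration for Llama-2 7B using Hugging Face peft.
# Hyperparameters and the model identifier are assumptions, not the study's settings.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated model; requires access
lora = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the adapters
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```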
Behav Res Methods
December 2024
Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China.
The study of large language models (LLMs) and LLM-powered chatbots has gained significant attention in recent years, with researchers treating LLMs as participants in psychological experiments. To facilitate this research, we developed an R package called "MacBehaviour" (https://github.com/xufengduan/MacBehaviour), which interacts with over 100 LLMs, including OpenAI's GPT family, the Claude family, Gemini, the Llama family, and other open-weight models.