Selecting the "right" amount of information to include in a summary is a difficult task. A good summary should be detailed and entity-centric without being overly dense and hard to follow. To better understand this tradeoff, we solicit increasingly dense GPT-4 summaries with what we refer to as a "Chain of Density" (CoD) prompt. Specifically, GPT-4 generates an initial entity-sparse summary before iteratively incorporating missing salient entities without increasing the length. Summaries generated by CoD are more abstractive, exhibit more fusion, and have less of a lead bias than GPT-4 summaries generated by a vanilla prompt. We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are more dense than those generated by a vanilla prompt and almost as dense as human-written summaries. Qualitative analysis supports the notion that there exists a tradeoff between informativeness and readability. 500 annotated CoD summaries, as well as an extra 5,000 unannotated summaries, are freely available on HuggingFace.
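A rough sketch of the densification loop described above, assuming the OpenAI Python client and placeholder prompt wording (the exact CoD prompt used in the paper differs, and `ARTICLE` is a stand-in for the source text):

```python
# Sketch of an iterative "densify without lengthening" loop; prompts and
# model choice are illustrative assumptions, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ARTICLE = "..."  # source news article text (placeholder)

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: an initial, deliberately entity-sparse summary.
summary = chat(f"Write a short (~80 word), entity-sparse summary of:\n\n{ARTICLE}")

# Steps 2..5: fold in missing salient entities while keeping the length fixed.
for _ in range(4):
    summary = chat(
        "Identify 1-3 informative entities from the article that are missing "
        "from the summary, then rewrite the summary to include them without "
        f"increasing its length.\n\nArticle:\n{ARTICLE}\n\nSummary:\n{summary}"
    )
```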
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419567 | PMC |
| http://dx.doi.org/10.18653/v1/2023.newsum-1.7 | DOI Listing |
J Vasc Interv Radiol
January 2025
Associate Professor of Radiology and Imaging Sciences, Division of Interventional Radiology and Image-Guided Medicine, Emory University School of Medicine, Atlanta, GA.
This study assesses the feasibility of using large language models such as GPT-4 (OpenAI, San Francisco, CA, USA) to summarize interventional radiology (IR) procedural reports to improve layperson understanding and to translate medical texts into multiple languages. A total of 200 reports from eight categories were summarized using GPT-4. Readability was assessed with the Flesch-Kincaid Reading Level (FKRL) and Flesch Reading Ease Score (FRES).
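For reference, both readability metrics reduce to simple functions of word, sentence, and syllable counts, using the standard published coefficients. A minimal sketch (the syllable counter below is a crude heuristic, not the validated counting rules that readability tools use):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of vowels; real tools use dictionaries/rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    wps = n_words / sentences        # words per sentence
    spw = syllables / n_words        # syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease Score
    fkrl = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid grade level
    return fres, fkrl
```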
JMIR Med Inform
January 2025
Department of Science and Education, Shenzhen Baoan Women's and Children's Hospital, Shenzhen, China.
Background: Large language models (LLMs) have been proposed as valuable tools in medical education and practice. The Chinese National Nursing Licensing Examination (CNNLE) presents unique challenges for LLMs due to its requirement for both deep domain-specific nursing knowledge and the ability to make complex clinical decisions, which differentiates it from more general medical examinations. However, their potential application in the CNNLE remains unexplored.
J Am Med Inform Assoc
December 2024
Department of Radiology, Stanford University, Stanford, CA 94304, United States.
Objective: Brief hospital course (BHC) summaries are clinical documents that summarize a patient's hospital stay. While large language models (LLMs) demonstrate remarkable capabilities in automating real-world tasks, their suitability for healthcare applications such as synthesizing BHCs from clinical notes has not been shown. We introduce a novel preprocessed dataset, the MIMIC-IV-BHC, encapsulating clinical note and BHC pairs to adapt LLMs for BHC synthesis.
Sci Rep
January 2025
Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA.
Continuous glucose monitors (CGMs) provide valuable insights about glycemic control that aid in diabetes management. However, interpreting metrics and charts and synthesizing them into linguistic summaries is often non-trivial for patients and providers. The advent of large language models (LLMs) has enabled real-time text generation and summarization of medical data.
J Med Internet Res
January 2025
Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA, United States.
Background: The increasing use of social media to share lived and living experiences of substance use presents a unique opportunity to obtain information on side effects, use patterns, and opinions on novel psychoactive substances. However, due to the large volume of data, obtaining useful insights through natural language processing technologies such as large language models is challenging.
Objective: This paper aims to develop a retrieval-augmented generation (RAG) architecture for medical question answering, addressing clinicians' queries on emerging health-related issues using user-generated medical information from social media.
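As a rough illustration of the retrieval-augmented generation pattern described here (not the authors' implementation; the corpus, embedding model, and prompt are illustrative assumptions):

```python
# Minimal RAG sketch for question answering over a corpus of social-media posts.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

posts = [
    "Post describing side effects of a novel psychoactive substance ...",
    "Post discussing use patterns ...",
]  # user-generated documents to retrieve from (placeholders)
post_vecs = encoder.encode(posts, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k posts most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = post_vecs @ q
    return [posts[i] for i in np.argsort(-scores)[:k]]

def build_prompt(query: str) -> str:
    # Ground the answer in retrieved context to reduce unsupported generation.
    context = "\n".join(retrieve(query))
    return (
        "Answer the clinician's question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# The resulting prompt would then be passed to an LLM for grounded generation.
```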