Retrieval-Augmented Generation (RAG) pairs large language models (LLMs) with recent data to produce more accurate, context-aware outputs. By converting text into numeric embeddings, RAG locates and retrieves relevant "chunks" of data that, along with the query, ground the model's responses in current, specific information. This process helps reduce outdated or fabricated answers. In oncology, RAG has shown particular promise. Studies have demonstrated its ability to improve treatment recommendations by integrating genetic profiles, strengthen clinical trial matching through biomarker analysis, and accelerate drug development by clarifying model-driven insights. Despite its advantages, RAG depends on high-quality data; biased or incomplete sources can lead to inaccurate outcomes. Careful implementation and human oversight are crucial for ensuring the effectiveness and reliability of RAG in oncology.
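The embed-retrieve-ground loop described above can be sketched in a few lines. The example below is illustrative only and not drawn from any of the cited studies: embed() is a toy stand-in for a real embedding model, and the chunk texts and query are made up.

```python
# Minimal RAG sketch: embed text chunks, embed the query the same way,
# retrieve the closest chunks by cosine similarity, and prepend them to
# the prompt so the model's answer is grounded in that context.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy deterministic embedding: hash each token into a fixed-size vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    scores = [float(q @ embed(c)) for c in chunks]
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

chunks = [
    "Trial NCT-XXXX enrolls patients with EGFR exon 19 deletions.",  # hypothetical
    "Osimertinib is a third-generation EGFR inhibitor.",
    "RAG retrieval grounds model answers in current documents.",
]
query = "Which trial matches an EGFR exon 19 deletion?"
context = "\n".join(retrieve(query, chunks))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
print(prompt)  # this grounded prompt would then be passed to the LLM
```

In practice the toy embed() would be replaced by a trained embedding model, and the chunk store would hold vetted clinical documents rather than hard-coded strings.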
DOI: http://dx.doi.org/10.1016/j.ejca.2025.115341
Radiol Artif Intell
March 2025
Department of Radiology & Biomedical Imaging, University of California, San Francisco (UCSF), San Francisco, Calif.
Retrieval-augmented generation (RAG) is a strategy to improve the performance of large language models (LLMs) by providing the LLM with an updated corpus of knowledge that can be used for answer generation in real time. RAG may improve LLM performance and clinical applicability in radiology by providing citable, up-to-date information without requiring model fine-tuning. In this retrospective study, a radiology-specific RAG was developed using a vector database of 3,689 articles published from January 1999 to December 2023.
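A vector database over an article corpus can be approximated, for illustration, with a small in-memory index that returns passages together with citable metadata. The sketch below is an assumption-laden simplification, not the study's actual pipeline; embed_fn, Article, and the example query are hypothetical.

```python
# Illustrative in-memory "vector database": each article is embedded once,
# queries are embedded the same way, and the top-scoring articles are
# returned with their metadata so answers can cite their sources.
from dataclasses import dataclass
import numpy as np

@dataclass
class Article:
    title: str
    year: int
    text: str

class VectorIndex:
    def __init__(self, embed_fn):
        self.embed_fn = embed_fn          # any function mapping text -> unit vector
        self.articles: list[Article] = []
        self.matrix = None                # stacked article embeddings

    def add(self, articles: list[Article]) -> None:
        """Embed each article and stack the vectors into one matrix."""
        self.articles.extend(articles)
        self.matrix = np.vstack([self.embed_fn(a.text) for a in self.articles])

    def search(self, query: str, k: int = 3) -> list[Article]:
        """Score by dot product (cosine similarity for unit vectors)."""
        scores = self.matrix @ self.embed_fn(query)
        return [self.articles[i] for i in np.argsort(scores)[::-1][:k]]

# Usage sketch: retrieved articles supply both answer context and citations.
# index = VectorIndex(embed_fn=my_embedding_model)
# index.add(corpus)  # e.g., articles published 1999-2023
# for a in index.search("MRI protocol for suspected meningioma"):
#     print(a.title, a.year)
```

A production system would swap this in-memory index for a dedicated vector database and chunk articles into passages, but the retrieval logic is the same.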
Radiol Artif Intell
March 2025
Department of Radiology, Duke University Hospital, 2301 Erwin Rd, Durham, NC 27710.
Purpose: To develop and evaluate an automated system for extracting structured clinical information from unstructured radiology and pathology reports using open-weights language models (LMs) and retrieval-augmented generation (RAG), and to assess the effects of model configuration variables on extraction performance. Materials and Methods: This retrospective study used two datasets: 7,294 radiology reports annotated for Brain Tumor Reporting and Data System (BT-RADS) scores and 2,154 pathology reports annotated for mutation status (January 2017 to July 2021). An automated pipeline was developed to benchmark the structured data extraction accuracy of various LM and RAG configurations.
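The kind of structured-extraction step described above can be illustrated as a prompt that requests a fixed JSON schema, followed by validation of the model output. The sketch below is not the authors' pipeline: generate() is a toy stand-in for whatever open-weights LM is being benchmarked, and the field names and report text are hypothetical.

```python
# Illustrative structured extraction: ask the model for JSON with a fixed
# schema (e.g., BT-RADS score, mutation status), then validate the output
# before storing it. Retrieved reference snippets can be added as context.
import json

SCHEMA_FIELDS = {"bt_rads_score", "mutation_status"}  # hypothetical field names

def build_prompt(report: str, context: str = "") -> str:
    """Assemble an extraction prompt, optionally augmented with retrieved context."""
    return (
        "Extract the following fields from the report and answer with JSON only.\n"
        f"Fields: {sorted(SCHEMA_FIELDS)}\n"
        + (f"Reference material:\n{context}\n" if context else "")
        + f"Report:\n{report}\n"
    )

def parse_extraction(raw: str) -> dict:
    """Check that the model returned valid JSON covering every expected field."""
    data = json.loads(raw)
    missing = SCHEMA_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    return data

def generate(prompt: str) -> str:
    # Toy stand-in for an LM call; a real pipeline would query the model here.
    return '{"bt_rads_score": "3a", "mutation_status": "IDH1 mutant"}'

report = "MRI brain: slight increase in FLAIR signal... (example text)"
print(parse_extraction(generate(build_prompt(report))))
```

Benchmarking different model configurations then amounts to running this loop over the annotated reports and comparing the parsed fields against the reference annotations.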
Eur J Cancer
March 2025
Division of Data-Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States.
Retrieval-Augmented Generation (RAG) pairs large language models (LLMs) with recent data to produce more accurate, context-aware outputs. By converting text into numeric embeddings, RAG locates and retrieves relevant "chunks" of data that, along with the query, ground the model's responses in current, specific information. This process helps reduce outdated or fabricated answers.
J Med Internet Res
March 2025
Center for Digital Health, University Hospital Tuebingen, Tuebingen, Germany.
Background: Molecular tumor boards (MTBs) require intensive manual investigation to generate optimal treatment recommendations for patients. Large language models (LLMs) can catalyze MTB recommendations, decrease human error, improve accessibility to care, and enhance the efficiency of precision oncology.
Objective: In this study, we aimed to investigate the efficacy of LLM-generated treatment recommendations for MTB patients.
Trends Biotechnol
March 2025
Department of Energy, Environmental, and Chemical Engineering, Washington University in St Louis, St Louis, MO 63130, USA.
Large language models (LLMs) are transforming synthetic biology (SynBio) education and research. In this review, we cover the advances and potential impacts of LLMs in biomanufacturing. First, we summarize recent developments and compare the capabilities of US and Chinese language models in addressing fundamental SynBio questions.