Open-Weight Language Models and Retrieval Augmented Generation for Automated Structured Data Extraction from Diagnostic Reports: Assessment of Approaches and Parameters.

Purpose: To develop and evaluate an automated system for extracting structured clinical information from unstructured radiology and pathology reports using open-weight language models (LMs) and retrieval-augmented generation (RAG), and to assess the effects of model configuration variables on extraction performance.

Materials and Methods: This retrospective study used two datasets: 7,294 radiology reports annotated for Brain Tumor Reporting and Data System (BT-RADS) scores and 2,154 pathology reports annotated for mutation status (January 2017 to July 2021). An automated pipeline was developed to benchmark the structured data extraction accuracy of various LM and RAG configurations. The impact of model size, quantization, prompting strategies, output formatting, and inference parameters on model accuracy was systematically evaluated.

Results: The best-performing models achieved up to 98% accuracy in extracting BT-RADS scores from radiology reports and over 90% accuracy in extracting mutation status from pathology reports; the best single model was a medically fine-tuned Llama 3. Larger, newer, and domain fine-tuned models consistently outperformed older and smaller models (mean accuracy, 86% vs 75%; P < .001). Model quantization had minimal impact on performance. Few-shot prompting significantly improved accuracy (mean increase, 32% ± 32%; P = .02). RAG improved performance for complex pathology reports (+48% ± 11%; P = .001) but not for shorter radiology reports (-8% ± 31%; P = .39).

Conclusion: This study demonstrates the potential of open-weight LMs for automated extraction of structured clinical data from unstructured clinical reports in a local, privacy-preserving setting. Careful model selection, prompt engineering, and semiautomated optimization using annotated data are critical for optimal performance. ©RSNA, 2025.
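
The kind of pipeline the abstract describes can be approximated with a locally served open-weight model. The following is a minimal sketch and not the authors' code: it assumes an OpenAI-compatible chat endpoint (as exposed by servers such as vLLM or Ollama) at http://localhost:8000/v1, and the model name, endpoint URL, few-shot examples, and report text are illustrative placeholders.

```python
"""Hypothetical few-shot extraction of BT-RADS scores with a local open-weight LM.

Assumptions (not from the paper): an OpenAI-compatible /v1/chat/completions
endpoint is running locally, and the model answers with JSON when asked to.
"""
import json
import requests

API_URL = "http://localhost:8000/v1/chat/completions"  # assumed local LM server
MODEL = "llama3-8b-instruct"                            # placeholder model name

# Few-shot examples embedded in the prompt; the abstract reports that such
# examples markedly improved accuracy. These snippets are invented.
FEW_SHOT = [
    {"report": "... stable enhancement, no new lesions ...", "bt_rads": "2"},
    {"report": "... increased enhancement concerning for progression ...", "bt_rads": "4"},
]


def build_messages(report_text: str) -> list:
    """Assemble a chat prompt that asks for a JSON-only answer."""
    system = (
        "You extract the BT-RADS score from brain MRI follow-up reports. "
        'Reply with JSON only, for example {"bt_rads": "3a"}.'
    )
    messages = [{"role": "system", "content": system}]
    for ex in FEW_SHOT:
        messages.append({"role": "user", "content": ex["report"]})
        messages.append({"role": "assistant", "content": json.dumps({"bt_rads": ex["bt_rads"]})})
    messages.append({"role": "user", "content": report_text})
    return messages


def extract_bt_rads(report_text: str) -> str:
    """Send the report to the local model and parse its JSON reply."""
    resp = requests.post(
        API_URL,
        json={"model": MODEL, "messages": build_messages(report_text), "temperature": 0.0},
        timeout=120,
    )
    resp.raise_for_status()
    content = resp.json()["choices"][0]["message"]["content"]
    return json.loads(content)["bt_rads"]  # raises if the model ignored the format


if __name__ == "__main__":
    print(extract_bt_rads("MRI brain: slight decrease in the enhancing component ..."))
```

For the longer pathology reports, the abstract reports that RAG helped; one plausible, equally hypothetical realization is to rank report chunks against the extraction question and prepend only the top-ranked chunks to the prompt above rather than the full report:

```python
"""Hypothetical RAG step: keep only the report chunks most relevant to the question.

The paragraph-based chunking rule and the query string are assumptions,
not details taken from the paper.
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def top_chunks(report_text: str, query: str, k: int = 3) -> list:
    """Return the k report paragraphs most similar to the extraction query."""
    chunks = [c.strip() for c in report_text.split("\n\n") if c.strip()]
    if len(chunks) <= k:
        return chunks
    vec = TfidfVectorizer().fit(chunks + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(chunks))[0]
    ranked = sorted(zip(scores, chunks), reverse=True, key=lambda pair: pair[0])
    return [chunk for _, chunk in ranked[:k]]


# Usage sketch: context = "\n\n".join(top_chunks(report, "IDH1 mutation status")),
# then pass the context plus the question to the model instead of the full report.
```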

Source: http://dx.doi.org/10.1148/ryai.240551

Publication Analysis

Top Keywords

pathology reports: 16
reports: 9
language models: 8
retrieval augmented: 8
augmented generation: 8
structured data: 8
data extraction: 8
structured clinical: 8
radiology reports: 8
reports annotated: 8
