AI Article Synopsis

  • The systematic review examines the use of natural language processing (NLP) in analyzing radiology reports, emphasizing the need for transparent methodologies to enable comparisons and reproducibility across studies.
  • It analyzed 164 studies published between January 2015 and October 2019, finding that most focused on disease classification (28%) and diagnostic surveillance (27.4%), primarily using English reports from various imaging modalities, with oncology being the most common disease area.
  • The review highlights issues such as inadequate reporting on essential factors like dataset preparation and validation, with only a small percentage providing details on external validation and data/code availability, suggesting a need for improved reporting standards in NLP research.

Article Abstract

Background: Automated language analysis of radiology reports using natural language processing (NLP) can provide valuable information on patients' health and disease. With its rapid development, NLP studies should have transparent methodology to allow comparison of approaches and reproducibility. This systematic review aims to summarise the characteristics and reporting quality of studies applying NLP to radiology reports.

Methods: We searched Google Scholar for studies published in English that applied NLP to radiology reports of any imaging modality between January 2015 and October 2019. At least two reviewers independently performed screening and completed data extraction. We specified 15 criteria relating to data source, datasets, ground truth, outcomes, and reproducibility for quality assessment. The primary NLP performance measures were precision, recall and F1 score.
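The performance measures named in the Methods (precision, recall, and F1 score) can be computed from raw counts as in the minimal sketch below. The counts and function name are illustrative assumptions, not figures from the study.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1) from true positive, false positive,
    and false negative counts, guarding against division by zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: 80 correct extractions, 20 false positives, 10 misses.
p, r, f = precision_recall_f1(tp=80, fp=20, fn=10)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.889 0.842
```

F1 is the harmonic mean of precision and recall, so it penalises a system that trades one heavily against the other.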

Results: Of the 4,836 records retrieved, we included 164 studies that used NLP on radiology reports. The commonest clinical applications of NLP were disease information or classification (28%) and diagnostic surveillance (27.4%). Most studies used English radiology reports (86%). Reports from mixed imaging modalities were used in 28% of the studies. Oncology (24%) was the most frequent disease area. Most studies had a dataset size > 200 (85.4%), but the proportions of studies that described their annotated, training, validation, and test sets were 67.1%, 63.4%, 45.7%, and 67.7% respectively. About half of the studies reported precision (48.8%) and recall (53.7%). Few studies reported external validation (10.8%), data availability (8.5%), or code availability (9.1%). There was no pattern of performance associated with the overall reporting quality.

Conclusions: There is a range of potential clinical applications for NLP of radiology reports in health services and research. However, we found suboptimal reporting quality that precludes comparison, reproducibility, and replication. Our results support the need for development of reporting standards specific to clinical NLP studies.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8487512
DOI: http://dx.doi.org/10.1186/s12880-021-00671-8

Publication Analysis

Top Keywords

radiology reports (24)
nlp radiology (16)
studies (13)
reporting quality (12)
nlp (9)
natural language (8)
language processing (8)
systematic review (8)
nlp studies (8)
clinical applications (8)
