AI Article Synopsis

  • Radiology reports have important information about patients, often describing where things are in the body.
  • The study aims to automatically find these location-based words (called spatial expressions) in different types of radiology reports.
  • They created a new hybrid deep learning method that identifies these terms better than older techniques, improving the average F1 score by 24 points over a standard sequence labeler.

Article Abstract

Radiology reports contain important clinical information about patients, much of which is conveyed through spatial expressions. Spatial expressions (or triggers) are mainly used to describe the positioning of radiographic findings or medical devices with respect to some anatomical structures. Because these expressions arise from the radiologist's mental visualization of the imaging findings, they are varied and complex. The focus of this work is to automatically identify spatial expression terms in three different radiology sub-domains. We propose a hybrid deep learning-based NLP method that comprises: 1) generating a set of candidate spatial triggers by exact match with the known trigger terms from the training data, 2) applying domain-specific constraints to filter the candidate triggers, and 3) utilizing a BERT-based classifier to predict whether a candidate trigger is a true spatial trigger or not. The results are promising, with an improvement of 24 points in the average F1 measure compared to a standard BERT-based sequence labeler.
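The three-step pipeline in the abstract can be sketched in outline. The trigger lexicon, the specific filtering constraint, and the classifier stub below are all illustrative assumptions, not the authors' actual resources or model; in the paper, step 3 is a BERT-based classifier rather than the placeholder shown here.

```python
KNOWN_TRIGGERS = {"in", "at", "within", "adjacent to", "overlying"}  # toy lexicon (assumption)

def generate_candidates(tokens, lexicon=KNOWN_TRIGGERS):
    """Step 1: exact-match candidate spatial triggers (uni- and bigrams)."""
    candidates = []
    for i, tok in enumerate(tokens):
        if tok.lower() in lexicon:
            candidates.append((i, i + 1, tok.lower()))
        if i + 1 < len(tokens):
            bigram = f"{tok} {tokens[i + 1]}".lower()
            if bigram in lexicon:
                candidates.append((i, i + 2, bigram))
    return candidates

def apply_constraints(candidates):
    """Step 2: domain-specific filtering; dropping sentence-initial matches
    is an invented example constraint, not one from the paper."""
    return [c for c in candidates if c[0] != 0]

def classify(candidate, tokens):
    """Step 3: placeholder for the BERT-based binary classifier that would
    score each candidate in its sentence context."""
    return True  # a real system returns the model's prediction

tokens = "Tube tip in the right atrium".split()
candidates = apply_constraints(generate_candidates(tokens))
triggers = [c for c in candidates if classify(c, tokens)]
# triggers → [(2, 3, 'in')]
```

The span tuples `(start, end, text)` make it easy to project predictions back onto the report for evaluation against gold-standard annotations.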

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7744270
DOI: http://dx.doi.org/10.18653/v1/2020.splu-1.6

Publication Analysis

Top Keywords

  • hybrid deep (8)
  • spatial trigger (8)
  • radiology reports (8)
  • spatial expressions (8)
  • spatial (6)
  • deep learning (4)
  • learning approach (4)
  • approach spatial (4)
  • trigger (4)
  • trigger extraction (4)
