Semisupervised Training of a Brain MRI Tumor Detection Model Using Mined Annotations.

Radiology

From the Departments of Radiology (N.C.S., V.Y., Y.R.C., D.C.G., J.T., V.H., S.S.H., S.K., J.L., K.J., A.I.H., R.J.Y.), Radiation Oncology (J.T.Y.), Neurosurgery (N.M.), Neurology (J.S.), and Epidemiology and Biostatistics, Division of Computational Oncology (K.P., J.G., S.P.S.), Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Weill Cornell Medical College, New York, NY (J.K.).

Published: April 2022

Background Artificial intelligence (AI) applications for cancer imaging conceptually begin with automated tumor detection, which can provide the foundation for downstream AI tasks. However, supervised training requires many image annotations, and performing dedicated post hoc image labeling is burdensome and costly. Purpose To investigate whether clinically generated image annotations can be data mined from the picture archiving and communication system (PACS), automatically curated, and used for semisupervised training of a brain MRI tumor detection model. Materials and Methods In this retrospective study, the cancer center PACS was mined for brain MRI scans acquired between January 2012 and December 2017 and included all annotated axial T1 postcontrast images. Line annotations were converted to boxes, excluding boxes shorter than 1 cm or longer than 7 cm. The resulting boxes were used for supervised training of object detection models using RetinaNet and Mask region-based convolutional neural network (R-CNN) architectures. The best-performing model trained from the mined data set was used to detect unannotated tumors on the training images themselves (self-labeling), automatically correcting many of the missing labels. After self-labeling, new models were trained using this expanded data set. Models were scored for precision, recall, and F1 score using a held-out test data set comprising 754 manually labeled images from 100 patients (403 intra-axial and 56 extra-axial enhancing tumors). Model F1 scores were compared using bootstrap resampling. Results The PACS query extracted 31 150 line annotations, yielding 11 880 boxes that met inclusion criteria. This mined data set was used to train models, yielding F1 scores of 0.886 for RetinaNet and 0.908 for Mask R-CNN. Self-labeling added 18 562 training boxes, improving model F1 scores to 0.935 (P < .001) and 0.954 (P < .001), respectively.
Conclusion The application of semisupervised learning to mined image annotations significantly improved tumor detection performance, achieving an excellent F1 score of 0.954. This development pipeline can be extended for other imaging modalities, repurposing unused data silos to potentially enable automated tumor detection across radiologic modalities. © RSNA, 2022
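The conversion of mined line annotations into training boxes with a 1–7 cm length filter could be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function name, the assumption of an isotropic in-plane pixel spacing, and the use of the line's axis-aligned extent as the box are all simplifications introduced here (a production pipeline would likely pad or square the box and handle anisotropic spacing).

```python
import math

def line_to_box(x0, y0, x1, y1, pixel_spacing_mm, min_cm=1.0, max_cm=7.0):
    """Convert a line annotation (endpoints in pixels) to a bounding box,
    discarding lines outside the allowed physical length range.

    pixel_spacing_mm: in-plane pixel size in millimetres (e.g. from the
    DICOM PixelSpacing attribute), assumed isotropic for simplicity.
    Returns (x_min, y_min, x_max, y_max), or None if the line is excluded.
    """
    # Physical length of the measurement line in centimetres.
    length_cm = math.hypot(x1 - x0, y1 - y0) * pixel_spacing_mm / 10.0
    if not (min_cm <= length_cm <= max_cm):
        return None  # shorter than 1 cm or longer than 7 cm
    # Axis-aligned extent of the line; a real pipeline might pad this.
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
```

For example, a 100-pixel line on a 0.5 mm/pixel image is 5 cm long and is kept, while a 10-pixel line (0.5 cm) is rejected.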
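The bootstrap comparison of F1 scores described above could work along these lines. This is a hedged sketch under assumptions not stated in the abstract: per-image true-positive/false-positive/false-negative counts are pooled before computing F1, images are resampled with replacement, and a one-sided p value is estimated with add-one smoothing. The function names and the exact resampling scheme are illustrative, not the authors' implementation.

```python
import random

def f1(tp, fp, fn):
    # F1 = 2PR / (P + R), guarding against zero denominators.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def pooled_f1(per_image):
    # Sum TP/FP/FN over all images, then compute a single F1 for the set.
    tp, fp, fn = map(sum, zip(*per_image))
    return f1(tp, fp, fn)

def bootstrap_p_value(a, b, n_boot=10_000, seed=0):
    """a, b: paired lists of (tp, fp, fn) tuples per test image for two
    models evaluated on the same images. Resamples images with
    replacement and returns a one-sided p value for the hypothesis that
    model a's F1 is no better than model b's."""
    rng = random.Random(seed)
    n = len(a)
    worse = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        if pooled_f1([a[i] for i in idx]) <= pooled_f1([b[i] for i in idx]):
            worse += 1
    return (worse + 1) / (n_boot + 1)  # add-one smoothing avoids p = 0
```

With 10 000 resamples, a model that outperforms its comparator on essentially every resample yields a p value near 1/10 001, consistent with reporting P < .001.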


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8962822
DOI: http://dx.doi.org/10.1148/radiol.210817

Publication Analysis

Top Keywords: tumor detection (20); data set (16); brain mri (12); image annotations (12); semisupervised training (8); training brain (8); mri tumor (8); detection model (8); automated tumor (8); supervised training (8)
