RadImageNet: An Open Radiologic Deep Learning Research Dataset for Effective Transfer Learning.

Radiol Artif Intell

BioMedical Engineering and Imaging Institute (X.M., Z.L., P.M.R., C.C., K.E.L., T.Y., H.G., Z.A.F., Y.Y.) and Department of Diagnostic, Interventional and Molecular Radiology (P.M.R., B.M., M.H., A.D., A.J., Z.A.F., Y.Y.), Icahn School of Medicine at Mount Sinai, Leon and Norma Hess Center for Science and Medicine, 1470 Madison Ave, New York, NY 10029; Department of Mathematics, University of Oklahoma, Norman, Okla (Y.W.); Department of Radiology, Cornell Medicine, New York, NY (T.D.); and Department of Radiology, East River Medical Imaging, New York, NY (T.D.).

Published: September 2022

Purpose: To demonstrate the value of pretraining with millions of radiologic images compared with ImageNet photographic images on downstream medical applications when using transfer learning.

Materials and Methods: This retrospective study included patients who underwent a radiologic study between 2005 and 2020 at an outpatient imaging facility. Key images and associated labels were retrospectively extracted from the original study interpretations. These images were used to train the RadImageNet models from random weight initialization. The RadImageNet models were compared with ImageNet models using the area under the receiver operating characteristic curve (AUC) for eight classification tasks and Dice scores for two segmentation problems.
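The two evaluation metrics named above, AUC for the classification tasks and the Dice score for the segmentation tasks, can be sketched in plain NumPy. This is a minimal illustration of the metrics only, not the authors' evaluation code; all names and the toy arrays are hypothetical:

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation (no tie handling)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort().argsort() + 1  # 1-based ranks of each score
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def dice(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

# Toy example: 4 cases, 2 positive; one 2x2 predicted vs reference mask.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))      # -> 0.75
print(dice([[1, 1], [0, 0]], [[1, 0], [0, 0]]))      # -> 0.666...
```

The rank-sum form of the AUC is equivalent to the usual trapezoidal ROC integration when there are no tied scores, and is simpler to verify by hand on small arrays.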

Results: The RadImageNet database consists of 1.35 million annotated medical images from 131 872 patients who underwent CT, MRI, and US for musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, abdominal, and pulmonary pathologic conditions. For transfer learning tasks on small datasets - thyroid nodules (US), breast masses (US), anterior cruciate ligament injuries (MRI), and meniscal tears (MRI) - the RadImageNet models demonstrated a significant advantage (P < .001) over ImageNet models (9.4%, 4.0%, 4.8%, and 4.5% AUC improvements, respectively). For larger datasets - pneumonia (chest radiography), COVID-19 (CT), SARS-CoV-2 (CT), and intracranial hemorrhage (CT) - the RadImageNet models also showed improved AUC (P < .001) by 1.9%, 6.1%, 1.7%, and 0.9%, respectively. Additionally, lesion localizations of the RadImageNet models were improved by 64.6% and 16.4% on the thyroid and breast US datasets, respectively.

Conclusion: RadImageNet pretrained models demonstrated better interpretability compared with ImageNet models, especially for smaller radiologic datasets.

Keywords: CT, MR Imaging, US, Head/Neck, Thorax, Brain/Brain Stem, Evidence-based Medicine, Computer Applications-General (Informatics)

Published under a CC BY 4.0 license. See also the commentary by Cadrin-Chênevert in this issue.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9530758
DOI: http://dx.doi.org/10.1148/ryai.210315

