Multi-parametric prostate MRI (mpMRI) is a powerful tool for diagnosing prostate cancer, but it is difficult to interpret even for experienced radiologists. A common radiological practice is to compare a magnetic resonance image with similarly diagnosed cases. To assist the radiological image interpretation process, computerized Content-Based Image Retrieval systems (CBIRs) can be employed to improve the reporting workflow and increase its accuracy. In this article, we propose a new supervised Siamese deep learning architecture able to handle multi-modal and multi-view MR images with similar PI-RADS scores. An experimental comparison with well-established deep learning-based CBIRs (namely, standard Siamese networks and autoencoders) showed significantly improved performance with respect to both diagnostic (ROC-AUC) and information retrieval metrics (Precision-Recall, Discounted Cumulative Gain, and Mean Average Precision). Finally, the proposed multi-view Siamese network is general in design, facilitating broad use in diagnostic medical image retrieval.
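The abstract does not include implementation details, so the following is only a minimal, hypothetical sketch of the general idea of a multi-view Siamese retrieval network: one encoder per MRI view/modality, a joint embedding head, and a contrastive loss that pulls together cases with the same PI-RADS score. All names (`MultiViewSiameseNet`, `ViewEncoder`, the embedding size, and the choice of contrastive loss) are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only; architecture details, losses, and hyperparameters
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder(nn.Module):
    """Small CNN mapping one MRI view/modality to a feature vector."""
    def __init__(self, in_channels: int = 1, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class MultiViewSiameseNet(nn.Module):
    """One encoder per view; concatenated features are projected to a
    joint embedding used for retrieval."""
    def __init__(self, n_views: int = 3, embed_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleList(
            ViewEncoder(embed_dim=embed_dim) for _ in range(n_views))
        self.head = nn.Linear(n_views * embed_dim, embed_dim)

    def embed(self, views):
        # `views`: list of tensors, one per modality/view, each (B, 1, H, W).
        feats = [enc(v) for enc, v in zip(self.encoders, views)]
        return F.normalize(self.head(torch.cat(feats, dim=1)), dim=1)

    def forward(self, views_a, views_b):
        # Siamese setup: the same weights embed both cases of a pair.
        return self.embed(views_a), self.embed(views_b)


def contrastive_loss(za, zb, same_pirads, margin: float = 1.0):
    """Pull embeddings of same-PI-RADS pairs together; push differing
    pairs at least `margin` apart."""
    d = F.pairwise_distance(za, zb)
    return (same_pirads * d.pow(2)
            + (1 - same_pirads) * F.relu(margin - d).pow(2)).mean()


if __name__ == "__main__":
    net = MultiViewSiameseNet(n_views=3)
    a = [torch.randn(4, 1, 64, 64) for _ in range(3)]  # e.g. T2w, ADC, DWI
    b = [torch.randn(4, 1, 64, 64) for _ in range(3)]
    labels = torch.tensor([1., 0., 1., 0.])             # 1 = same PI-RADS
    za, zb = net(a, b)
    print(contrastive_loss(za, zb, labels))
```

At query time, a case embedded this way could be compared against a database of embedded cases by Euclidean or cosine distance, and the nearest neighbours returned for the radiologist to review, which is the retrieval setting evaluated by the Precision-Recall, DCG, and MAP metrics mentioned above.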
DOI: http://dx.doi.org/10.1109/TMI.2020.3043641