We investigate the impact of several deep-learning-based methods for detecting and segmenting metastases of different lesion volumes on 3D brain MR images. A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak-learner fusion of the prediction features generated by the 2.5D and 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performance was analyzed by lesion volume, total metastatic volume per patient, and number of lesions per patient. The detection performance of the 2.5D and 3D U-Net methods had recall of and precision of for lesion volume, but deteriorated as metastasis size decreased below, to 0.58 to 0.74 in recall and 0.16 to 0.25 in precision. Comparing the detection capability of the two U-Nets, the 2.5D network achieved higher precision, whereas the 3D network achieved higher recall, for all lesion sizes. The weak-learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; in particular, it increased precision to 0.83 for lesion volumes of 0.1 to but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either small or large metastases, presumably because of the limited data size. Our study reports the performance of four deep learning methods in relation to lesion size, total metastatic volume, and number of lesions per patient, providing insight for further development of deep learning networks.
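The lesion-wise analysis described above (recall and precision stratified by lesion volume) can be sketched in plain Python. This is an illustrative reimplementation of the general evaluation pattern, not the study's actual code; the record layout and bin edges are hypothetical:

```python
def lesionwise_metrics(gt_lesions, detections):
    """Lesion-wise precision/recall.

    gt_lesions: list of {'volume': float, 'hit': bool} ground-truth lesions
                ('hit' = matched by some prediction).
    detections: list of {'tp': bool} predicted lesions
                ('tp' = matched to some ground-truth lesion).
    """
    tp = sum(d['tp'] for d in detections)
    precision = tp / len(detections) if detections else 0.0
    recall = (sum(g['hit'] for g in gt_lesions) / len(gt_lesions)
              if gt_lesions else 0.0)
    return precision, recall


def recall_by_volume_bin(gt_lesions, edges):
    """Bin ground-truth lesions by volume and compute recall per bin."""
    result = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        group = [g for g in gt_lesions if lo <= g['volume'] < hi]
        if group:
            result[(lo, hi)] = sum(g['hit'] for g in group) / len(group)
    return result
```

Stratifying recall this way is what exposes the drop on small metastases: a single overall recall number would hide that the misses concentrate in the smallest volume bin.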
Full text:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8140611
DOI: http://dx.doi.org/10.1117/1.JMI.8.3.037001
Int J Comput Assist Radiol Surg
January 2025
Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
Purpose: Semantic segmentation and landmark detection are fundamental tasks of medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new state of the art for segmentation, it falls short in landmark detection, a strength of shape-based approaches.
Methods: In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation by employing a fully convolutional architecture.
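As a toy illustration of how a landmark coordinate can be read out of a dense prediction map (a common heatmap read-out pattern, and only an assumption here; the paper's image-to-shape representation is richer than a plain argmax), a minimal sketch in plain Python:

```python
def heatmap_to_landmark(heatmap):
    """Return the (row, col) of the peak response in a 2D score map.

    A dense landmark head typically predicts one score map per landmark;
    the landmark coordinate is taken at the maximum response.
    """
    best, best_rc = float('-inf'), (0, 0)
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc
```

Because the same fully convolutional backbone can feed both a segmentation head and such dense landmark maps, the two tasks can share features and be trained jointly.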
Neurosurg Rev
January 2025
Department of Neurosurgery, Mount Sinai Hospital, Icahn School of Medicine, New York City, NY, USA.
Currently, the World Health Organization (WHO) grade of meningiomas is determined from biopsy results. Accurate non-invasive preoperative grading could therefore significantly improve treatment planning and patient outcomes. Considering recent advances in machine learning (ML) and deep learning (DL), this meta-analysis aimed to evaluate the performance of these models in predicting WHO meningioma grade from imaging data.
J Imaging Inform Med
January 2025
Department of Anesthesiology, E-Da Cancer Hospital, I-Shou University, Kaohsiung, Taiwan.
Parkinson's disease (PD), a degenerative disorder of the central nervous system, is commonly diagnosed using functional medical imaging techniques such as single-photon emission computed tomography (SPECT). In this study, we utilized two SPECT data sets (n = 634 and n = 202) from different hospitals to develop a model capable of accurately predicting PD stages, a multiclass classification task. We used the entire three-dimensional (3D) brain images as input and experimented with various model architectures.
J Imaging Inform Med
January 2025
Computer Science Department, University of Geneva, Geneva, Switzerland.
Accurate wound segmentation is crucial for the precise diagnosis and treatment of various skin conditions through image analysis. In this paper, we introduce a novel dual attention U-Net model designed for precise wound segmentation. Our proposed architecture integrates two widely used deep learning models, VGG16 and U-Net, incorporating dual attention mechanisms to focus on relevant regions within the wound area.
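A generic soft-attention gate of the kind such dual-attention architectures build on can be sketched as follows. This is an illustrative, scalar-per-position version under assumed inputs, not the paper's exact mechanism:

```python
import math

def attention_gate(features, scores):
    """Scale each feature by a sigmoid-normalized attention score.

    High-scoring (relevant) positions pass through nearly unchanged,
    while low-scoring positions are suppressed toward zero.
    """
    return [f * (1.0 / (1.0 + math.exp(-s)))
            for f, s in zip(features, scores)]
```

In a U-Net, such gating is typically applied to skip-connection features so the decoder attends to the wound region rather than the background.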