The quantification of new or enlarged lesions from follow-up MRI scans is an important surrogate of clinical disease activity in patients with multiple sclerosis (MS). Not only is manual segmentation time consuming, but inter-rater variability is high. Currently, only a few fully automated methods are available. We address this gap in the field by employing a 3D convolutional neural network (CNN) with an encoder-decoder architecture for fully automatic longitudinal lesion segmentation. Input data consist of two fluid-attenuated inversion recovery (FLAIR) images (baseline and follow-up) per patient. Each image is entered into the encoder, and the resulting feature maps are concatenated and then fed into the decoder. The output is a 3D mask indicating new or enlarged lesions (compared to the baseline scan). The proposed method was trained on 1809 single-time-point and 1444 longitudinal patient data sets and then validated on 185 independent longitudinal data sets from two different scanners. For each of the two validation data sets, manual segmentations from three experienced raters were available. The performance of the proposed method was compared to the open-source Lesion Segmentation Toolbox (LST), a current state-of-the-art longitudinal lesion segmentation method. The mean lesion-wise inter-rater sensitivity was 62%, while the mean inter-rater number of false-positive (FP) findings was 0.41 lesions per case. The two validated algorithms showed a mean sensitivity of 60% (CNN) and 46% (LST), and a mean FP count of 0.48 (CNN) and 1.86 (LST) per case. Sensitivity and number of FPs did not differ significantly between the CNN and the manual raters (significance level p < 0.05). New or enlarged lesions counted by the CNN algorithm appeared to be comparable with manual expert ratings. The proposed algorithm seems to outperform currently available approaches, particularly LST. The high inter-rater variability in manual segmentation indicates the complexity of identifying new or enlarged lesions.
An automated CNN-based approach can quickly provide an independent and deterministic assessment of new or enlarged lesions from baseline to follow-up scans with acceptable reliability.
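The lesion-wise sensitivity and false-positive counts reported above can in principle be computed by matching connected components between a predicted and a reference mask of new or enlarged lesions. The sketch below is an illustration, not the authors' evaluation code: the matching criterion (any shared voxel, 6-connectivity) is an assumption, since the abstract does not specify how overlap was defined.

```python
from collections import deque

def connected_components(voxels):
    """Group a set of (x, y, z) voxel coordinates into 6-connected components."""
    voxels = set(voxels)
    comps = []
    while voxels:
        seed = voxels.pop()
        comp = {seed}
        queue = deque([seed])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in voxels:
                    voxels.remove(nb)
                    comp.add(nb)
                    queue.append(nb)
        comps.append(comp)
    return comps

def lesionwise_metrics(pred_voxels, ref_voxels):
    """Lesion-wise sensitivity and false-positive count for one case.

    A reference lesion counts as detected if any predicted lesion shares
    at least one voxel with it; a predicted lesion overlapping no
    reference voxel counts as a false positive (one possible criterion).
    """
    pred = connected_components(pred_voxels)
    ref = connected_components(ref_voxels)
    ref_union = set(ref_voxels)
    detected = sum(1 for r in ref if any(r & p for p in pred))
    fp = sum(1 for p in pred if not (p & ref_union))
    sensitivity = detected / len(ref) if ref else 1.0
    return sensitivity, fp
```

For example, with a reference mask containing two lesions and a prediction that hits one of them plus one spurious blob, `lesionwise_metrics` returns a sensitivity of 0.5 and one false positive. Averaging these per-case values over a validation set yields summary figures of the kind reported above.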
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7554211
DOI: http://dx.doi.org/10.1016/j.nicl.2020.102445
Insights Imaging
January 2025
Medical Research Department, Qingdao Hospital, University of Health and Rehabilitation Sciences (Qingdao Municipal Hospital), Qingdao, P. R. China.
Objective: To develop an automatic segmentation model to delineate adnexal masses and to construct a machine learning model to differentiate between low malignant risk and intermediate-high malignant risk of adnexal masses based on the Ovarian-Adnexal Reporting and Data System (O-RADS).
Methods: A total of 663 ultrasound images of adnexal masses were collected and divided into two sets based on assessments by experienced radiologists: a low malignant risk set (n = 446) and an intermediate-high malignant risk set (n = 217). Deep learning segmentation models were trained and selected to automatically segment adnexal masses.
Epilepsia
January 2025
Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China.
Objective: To evaluate iron deposition patterns in patients with cerebral cavernous malformation-related epilepsy (CRE) using quantitative susceptibility mapping (QSM) for detailed analysis of iron distribution associated with a history of epilepsy and severity.
Methods: This study is part of the Quantitative Susceptibility Biomarker and Brain Structural Property for Cerebral Cavernous Malformation Related Epilepsy (CRESS) cohort, a prospective multicenter study. QSM was used to quantify iron deposition in patients with sporadic cerebral cavernous malformations (CCMs).
Taiwan J Ophthalmol
December 2024
Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, Tamil Nadu, India.
The aim of this study is to describe the genotype and phenotype of patients with bestrophinopathy. Case records were reviewed retrospectively; findings of multimodal imaging (color fundus photography, optical coherence tomography (OCT), and fundus autofluorescence) as well as electrophysiological and genetic tests were noted. Twelve eyes of six patients from distinct Indian families with a molecular diagnosis were enrolled.
Eur J Radiol Open
June 2025
Department of Nuclear medicine, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China.
Objectives: To develop and validate a deep learning model using multimodal PET/CT imaging for detecting and classifying focal liver lesions (FLL).
Methods: This study included 185 patients who underwent ¹⁸F-FDG PET/CT imaging at our institution from March 2022 to February 2023. We analyzed serological data and imaging.
Prz Gastroenterol
September 2024
Department of Surgery, General University Hospital of Patras, Patras, Greece.
Artificial intelligence (AI) and image processing are revolutionising the diagnosis and management of liver cancer. Recent advancements showcase AI's ability to analyse medical imaging data, like computed tomography scans and magnetic resonance imaging, accurately detecting and classifying liver cancer lesions for early intervention. Predictive models aid prognosis estimation and recurrence pattern identification, facilitating personalised treatment planning.