The prevalence of childhood obesity has increased significantly worldwide, highlighting the need for accurate, noninvasive quantification of body fat distribution in children. The purpose of this study was to develop and test an automated deep learning method for subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) segmentation using Dixon MRI acquisitions in adolescents.

This study was embedded within the Generation R Study, a prospective population-based cohort study in Rotterdam, The Netherlands. The current study included 2989 children (1432 boys, 1557 girls; mean age, 13.5 years) who underwent investigational whole-body Dixon MRI after reaching the age of 13 years during the follow-up phase of the Generation R Study. A 2D competitive dense fully convolutional neural network model (2D-CDFNet) was trained from scratch to segment abdominal SAT and VAT using Dixon MRI-based images. The model underwent training, validation, and testing in 62, eight, and 15 children, respectively, selected by stratified random sampling, with manual segmentations used as reference. Segmentation performance was assessed using the Dice similarity coefficient and volumetric similarity. Two observers independently performed subjective visual assessments of automated segmentations in 504 children, selected by stratified random sampling, with undersegmentation and oversegmentation scored on a scale of 0-3 (a score of 3 denoting nearly perfect segmentation). For the 2820 children with complete data, Spearman correlation coefficients were computed among MRI measurements, BMI, and dual-energy x-ray absorptiometry (DEXA)-based measurements. The model used (gitlab.com/radiology/msk/genr/abdomen/cdfnet) is publicly available.

In the test dataset, the mean Dice similarity coefficient and mean volumetric similarity were 0.94 ± 0.03 [SD] and 0.98 ± 0.01 for SAT and 0.85 ± 0.05 and 0.92 ± 0.04 for VAT, respectively. The two observers assigned a score of 3 for SAT undersegmentation in 94% and 93% of children and for SAT oversegmentation in 99% and 99%; for VAT, they assigned a score of 3 for undersegmentation in 99% and 99% and for oversegmentation in 95% and 97%. Correlations of SAT and VAT, respectively, were 0.808 and 0.698 with BMI and 0.941 and 0.801 with DEXA-derived fat mass.

We trained and evaluated the 2D-CDFNet model on Dixon MRI in adolescents. Quantitative and qualitative measures of automated SAT and VAT segmentations indicated strong model performance. The automated model may facilitate large-scale studies investigating abdominal fat distribution on MRI among adolescents, as well as associations of fat distribution with clinical outcomes.
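The two overlap metrics reported above can be computed directly from binary segmentation masks. A minimal sketch using NumPy (function names are illustrative, not from the published model code): the Dice coefficient measures spatial overlap, while volumetric similarity compares only the total segmented volumes.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * intersection / total if total else 1.0

def volumetric_similarity(pred, ref):
    """Volumetric similarity: 1 - |V_pred - V_ref| / (V_pred + V_ref)."""
    vp = int(pred.astype(bool).sum())
    vr = int(ref.astype(bool).sum())
    return 1.0 - abs(vp - vr) / (vp + vr) if (vp + vr) else 1.0
```

Note that volumetric similarity can be high even when overlap is poor (two equal-volume masks in different locations score 1.0), which is why the two metrics are reported together.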
DOI: http://dx.doi.org/10.2214/AJR.23.29570
Korean J Radiol
January 2025
Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany.
Objective: The aim of this study was to compare image quality features and lesion characteristics between a faster deep learning (DL) reconstructed T2-weighted (T2-w) fast spin-echo (FSE) Dixon sequence with super-resolution and a conventional T2-w FSE Dixon sequence for breast magnetic resonance imaging (MRI).

Materials And Methods: This prospective study was conducted between November 2022 and April 2023 using a 3T scanner. Both the DL-reconstructed and conventional sequences were acquired for each patient.
Magn Reson Imaging
January 2025
Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, Shaanxi, China. Electronic address:
Objective: To develop a novel combined nomogram based on 3D multi-echo Dixon (qDixon), magnetization transfer imaging (MTI) and clinical risk factors for the diagnosis of osteoporosis.
Materials And Methods: A total of 287 subjects who underwent MR examination with qDixon and MTI sequences participated in this study. These participants were randomly assigned to a training cohort and a validation cohort at a ratio of 7:3.
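The 7:3 random assignment of subjects to training and validation cohorts described above can be sketched as follows (the subject count matches the abstract; the seed and function name are illustrative):

```python
import random

def random_split(subjects, train_frac=0.7, seed=42):
    """Randomly assign subjects to training and validation cohorts."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

# With 287 subjects and a 7:3 ratio: 201 training, 86 validation.
train, val = random_split(range(287))
```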
Insights Imaging
January 2025
Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China.
Purpose: This study compares the diagnostic efficacy of non-contrast abbreviated MRI protocols with Gadoxetic acid-enhanced abbreviated MRI for detecting colorectal liver metastasis (CRLM), focusing on lesion characterization and surveillance.
Methods: Ninety-four patients, including 55 with pathologically verified CRLM, were enrolled, totaling 422 lesions (287 metastatic, 135 benign). Two independent readers assessed three MRI protocols per patient: Protocol 1 included non-contrast sequences (T2-weighted turbo spin-echo, T1-weighted Dixon, diffusion-weighted imaging (DWI), and ADC mapping).
Muscle Nerve
December 2024
Copenhagen Neuromuscular Center, Department of Neurology, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark.
Introduction/aims: Primary hypokalemic periodic paralysis (HypoPP) can present with periodic paralysis and/or permanent muscle weakness. Permanent weakness is accompanied by fat replacement of the muscle. It is unknown whether the permanent muscle weakness is solely due to fat replacement or if other factors affect the ability of the remaining muscle fibers to contract.
Muscle Nerve
December 2024
AMRA Medical AB, Linköping, Sweden.
Introduction/aims: Improved methodologies to monitor the progression of Duchenne muscular dystrophy (DMD) are needed, especially in the context of clinical trials. We report changes in muscle magnetic resonance imaging (MRI) parameters in participants with DMD, including changes in lean muscle volume (LMV), muscle fat fraction (MFF), and muscle fat infiltration (MFI) and their relationship to changes in functional parameters.
Methods: MRI data were obtained as part of a clinical study (NCT02310763) of domagrozumab, an antibody targeting myostatin, a negative regulator of skeletal muscle mass.