Purpose: Adaptive radiotherapy requires auto-segmentation in patients with head and neck (HN) cancer. In the current study, we propose an auto-segmentation model based on a generative adversarial network (GAN) applied to magnetic resonance (MR) images of HN cancer for MR-guided radiotherapy (MRgRT).
Materials and Methods: In the current study, we used a dataset from the American Association of Physicists in Medicine MRI Auto-Contouring (RT-MAC) Grand Challenge 2019. Specifically, eight structures in the HN region of the MR images, namely the bilateral submandibular glands, level II and level III lymph nodes, and parotid glands, were segmented with deep learning models using a GAN and a fully convolutional network with a U-net. The resulting segmentations were compared with the clinically used atlas-based segmentation.
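The results below compare 2.5D and 3D network inputs. A common way to form 2.5D inputs (an illustration only; the exact preprocessing used in the study is not specified in the abstract, and the window of one neighbouring slice on each side is an assumption) is to stack adjacent axial slices as input channels while predicting only the central slice:

```python
import numpy as np

def slices_2p5d(volume, k=1):
    """Build 2.5D inputs from a 3D volume: for each axial slice,
    stack its 2k+1 neighbours along a channel axis.
    Edge slices are clamped to the volume boundary."""
    z = volume.shape[0]
    idx = np.clip(np.arange(-k, k + 1)[None, :] + np.arange(z)[:, None], 0, z - 1)
    return volume[idx]  # shape: (z, 2k+1, H, W)

# Hypothetical toy volume: 4 slices of 2x2 voxels
vol = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
x = slices_2p5d(vol, k=1)
print(x.shape)  # (4, 3, 2, 2)
```

A 3D U-net instead consumes the whole `(z, H, W)` volume at once, trading memory for full through-plane context, which matches the abstract's observation that the 3D U-net segmented the OARs better than the 2.5D variant.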
Results: The mean Dice similarity coefficient (DSC) of the U-net and GAN models was significantly higher than that of the atlas-based method for all structures (p < 0.05), and the maximum Hausdorff distance (HD) was significantly lower than with the atlas method (p < 0.05). Comparing the 2.5D and 3D U-nets, the 3D U-net was superior in segmenting the organs at risk (OARs) of HN patients. The 2.5D GAN model achieved the highest DSC (0.75-0.85) and the lowest HD (within 5.4 mm) across all OARs.
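The two evaluation metrics above, DSC and HD, can be computed on binary segmentation masks as in this minimal NumPy sketch (the example masks are hypothetical, not data from the study; the brute-force HD is fine for small masks but scales quadratically with foreground size):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric (maximum) Hausdorff distance between two binary masks,
    brute-force over foreground voxel coordinates."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    # pairwise Euclidean distances between foreground points
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Hypothetical 2D example: two overlapping 4x4 squares
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), bool); b[3:7, 3:7] = True
print(f"{dice(a, b):.4f}")       # 0.5625
print(f"{hausdorff(a, b):.3f}")  # 1.414
```

In practice the HD is often reported on the mask surfaces in millimetres, so each voxel coordinate would first be scaled by the image spacing; that step is omitted here for brevity.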
Conclusions: In the current study, we investigated the auto-segmentation of OARs in HN patients using U-net and GAN models on MR images. Our proposed model is potentially valuable for improving the efficiency of RT treatment planning for HN cancer.
Full text:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9121028
DOI: http://dx.doi.org/10.1002/acm2.13579