Background and Purpose: To investigate the feasibility of synthesizing computed tomography (CT) images from magnetic resonance (MR) images using generative adversarial networks (GANs) for nasopharyngeal carcinoma (NPC) intensity-modulated radiotherapy (IMRT) planning.

Materials and Methods: Conventional T1-weighted MR images and CT images were acquired from 173 NPC patients. The MR and CT images of 28 patients were randomly chosen as the independent test set. The remaining images were used to build a conditional GAN (cGAN) and a cycle-consistency GAN (cycleGAN). A U-net was used as the generator in the cGAN, whereas a residual U-net was used as the generator in the cycleGAN. The cGAN was trained using deformably registered MR-CT image pairs, whereas the cycleGAN was trained using the unregistered MR and CT images. The synthetic CT (SCT) images generated by the cGAN and cycleGAN were compared with the true CT images with respect to Hounsfield unit (HU) discrepancy and dosimetric accuracy for NPC IMRT plans.
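
The difference between the two training objectives can be illustrated with a brief, hypothetical PyTorch-style sketch (not the authors' code): the cGAN uses a pix2pix-style paired loss, an adversarial term plus an L1 term tying the SCT to the registered true CT, whereas the cycleGAN replaces the paired L1 term with a cycle-consistency term so that no registered pairs are needed. The tiny generator and discriminator modules below are simplified placeholders, not the U-net and residual U-net architectures used in the study, and the loss weights are illustrative assumptions.

```python
# Hypothetical sketch of the two training objectives described above (not the
# authors' code). Paired cGAN: adversarial + L1 loss on registered MR-CT pairs.
# Unpaired cycleGAN: adversarial + cycle-consistency loss on unregistered images.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the U-net / residual U-net generators (greatly simplified)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """PatchGAN-style discriminator stand-in."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

adv = nn.MSELoss()  # least-squares GAN loss (assumption)
l1 = nn.L1Loss()

def cgan_generator_loss(G, D, mr, ct, lam=100.0):
    """Paired objective: D sees (MR, SCT) pairs; L1 ties SCT to the registered true CT."""
    sct = G(mr)
    pred = D(torch.cat([mr, sct], dim=1))
    return adv(pred, torch.ones_like(pred)) + lam * l1(sct, ct)

def cyclegan_generator_loss(G_mr2ct, G_ct2mr, D_ct, mr, lam=10.0):
    """Unpaired objective: adversarial realism of SCT plus MR -> CT -> MR cycle consistency."""
    sct = G_mr2ct(mr)
    pred = D_ct(sct)
    cycle_mr = G_ct2mr(sct)
    return adv(pred, torch.ones_like(pred)) + lam * l1(cycle_mr, mr)

if __name__ == "__main__":
    mr = torch.randn(2, 1, 64, 64)  # toy MR batch
    ct = torch.randn(2, 1, 64, 64)  # toy (registered) CT batch
    G, D = TinyGenerator(), TinyDiscriminator(in_ch=2)
    G2, D_ct = TinyGenerator(), TinyDiscriminator(in_ch=1)
    print(cgan_generator_loss(G, D, mr, ct).item())
    print(cyclegan_generator_loss(G, G2, D_ct, mr).item())
```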

Results: The mean absolute errors within the body were 69.67 ± 9.27 HU and 100.62 ± 7.39 HU for the cGAN and cycleGAN, respectively. The 2%/2-mm γ passing rates were (98.68 ± 0.94)% and (98.52 ± 1.13)% for the cGAN and cycleGAN, respectively. Meanwhile, the absolute dose discrepancies within the regions of interest were (0.49 ± 0.24)% and (0.62 ± 0.36)%, respectively.
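
For illustration, the first of these metrics, the mean absolute error in HU within the body, can be computed as in the following sketch, which assumes co-registered SCT and true CT volumes in HU and a precomputed boolean body mask. This is not the authors' evaluation code, and the γ analysis and dose comparison, which additionally require a treatment-planning system, are not shown.

```python
# Illustrative sketch of the HU mean-absolute-error metric reported above,
# assuming co-registered numpy volumes in HU and a boolean body mask
# (not the authors' evaluation code).
import numpy as np

def mae_hu(sct: np.ndarray, ct: np.ndarray, body_mask: np.ndarray) -> float:
    """Mean absolute HU difference restricted to voxels inside the body."""
    diff = np.abs(sct[body_mask] - ct[body_mask])
    return float(diff.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(0.0, 300.0, size=(8, 64, 64))    # toy CT volume (HU)
    sct = ct + rng.normal(0.0, 70.0, size=ct.shape)  # toy synthetic CT
    mask = np.ones(ct.shape, dtype=bool)             # toy body mask
    print(f"MAE within body: {mae_hu(sct, ct, mask):.1f} HU")
```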

Conclusion: Both cGAN and cycleGAN could swiftly generate accurate SCT volume images from MR images, with high dosimetric accuracy for NPC IMRT planning. cGAN was preferable if high-quality MR-CT image pairs were available.


Source: http://dx.doi.org/10.1016/j.radonc.2020.06.049


Similar Publications

Generative Adversarial Networks (GANs) have emerged as a powerful tool in artificial intelligence, particularly for unsupervised learning. This systematic review analyzes GAN applications in healthcare, focusing on image and signal-based studies across various clinical domains. Following Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines, we reviewed 72 relevant journal articles.

  • The advancement of deep learning in medical imaging has improved AI capabilities but has created challenges like the need for large training datasets and extensive labeling efforts.
  • Generative adversarial networks (GANs) offer innovative solutions by generating synthetic images for data augmentation and enhancing medical image processing tasks, which reduces reliance on labeled data.
  • This paper provides radiologists new to GAN technology with insights on various GAN architectures, training considerations, and practical applications, particularly in brain imaging, to encourage further research in the field.

Deep learning is transforming bioimage analysis, but its application in single-cell segmentation is limited by the lack of large, diverse annotated datasets. We addressed this by introducing a CycleGAN-based architecture, cGAN-Seg, that enhances the training of cell segmentation models with limited annotated datasets. During training, cGAN-Seg generates annotated synthetic phase-contrast or fluorescent images with morphological details and nuances closely mimicking real images.


Hematoxylin and eosin staining can be hazardous, expensive, and prone to error and variability. To circumvent these issues, artificial intelligence/machine learning models such as generative adversarial networks (GANs) are being used to 'virtually' stain unstained tissue, producing images indistinguishable from chemically stained tissue. Frameworks such as deep convolutional GANs (DCGANs) and conditional GANs (cGANs) have successfully generated highly reproducible 'stained' images.


Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools.
Comput Assist Surg (Abingdon), December 2024. Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy.

Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for imaging at each treatment fraction. However, limitations such as a narrow field of view, beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment.

