Purpose: To propose an attention-aware, cycle-consistent generative adversarial network (A-CycleGAN), enhanced with variational autoencoding (VAE), as a superior alternative to current state-of-the-art MR-to-CT image translation methods.

Materials And Methods: An attention-gating mechanism is incorporated into the discriminator network to encourage a more parsimonious use of network parameters, while VAE enhancement enables deeper discriminator architectures without inhibiting model convergence. Findings from 60 patients with head, neck, and brain cancer were used to train and validate A-CycleGAN, and findings from 30 patients formed the holdout test set, on which final evaluation metrics were reported: mean absolute error (MAE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR).
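As a rough illustration of the additive attention-gating idea (in the general style of attention gates for medical imaging networks, not the paper's exact architecture), the sketch below scales a feature map by attention coefficients computed from the features and a deeper gating signal. All weight matrices and shapes here are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate, illustrative sketch only.

    x        : feature map from the discriminator path, shape (n, c)
    g        : gating signal from a deeper layer,       shape (n, c)
    W_x, W_g : projection matrices, shape (c, k)
    psi      : vector collapsing joint features to one coefficient
               per position, shape (k,)
    Returns x scaled per position by attention coefficients in (0, 1).
    """
    # Joint feature: additive combination followed by a ReLU.
    q = np.maximum(x @ W_x + g @ W_g, 0.0)   # shape (n, k)
    alpha = sigmoid(q @ psi)                 # attention coeffs, shape (n,)
    return x * alpha[:, None]                # gated features, shape (n, c)
```

Because each coefficient lies in (0, 1), the gate can only attenuate features, which is one way to encourage the parsimonious parameter use the abstract describes.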

Results: A-CycleGAN achieved superior results compared with U-Net, a generative adversarial network (GAN), and a cycle-consistent GAN. The A-CycleGAN means, 95% confidence intervals (CIs), and Wilcoxon signed-rank two-sided test P values are reported for MAE (19.61 [95% CI: 18.83, 20.39], P = .0104), structural similarity index measure (0.778 [95% CI: 0.758, 0.798], P = .0495), and PSNR (62.35 [95% CI: 61.80, 62.90], P = .0571).
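Two of the reported metrics, MAE and PSNR, can be computed between a real CT and a synthetic CT as follows. This is a generic NumPy sketch, not the authors' evaluation code, and `data_range` is an assumed parameter for the image's dynamic range:

```python
import numpy as np

def mae(ct, sct):
    """Mean absolute error in the image's intensity units (e.g., HU)."""
    return float(np.mean(np.abs(ct - sct)))

def psnr(ct, sct, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = float(np.mean((ct - sct) ** 2))
    if mse == 0.0:
        return float("inf")          # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

In practice, libraries such as scikit-image provide validated implementations of PSNR and SSIM; the sketch above only shows the definitions.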

Conclusion: A-CycleGANs were a superior alternative to state-of-the-art MR-to-CT image translation methods. © RSNA, 2020.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8017410
DOI: http://dx.doi.org/10.1148/ryai.2020190027


Similar Publications

CT synthesis with deep learning for MR-only radiotherapy planning: a review.

Biomed Eng Lett

November 2024

Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, 50, Unist-gil, Ulsan, 44919 Republic of Korea.

MR-only radiotherapy planning is beneficial from the perspective of both time and safety since it uses synthetic CT for radiotherapy dose calculation instead of real CT scans. To elevate the accuracy of treatment planning and apply the results in practice, various methods have been adopted, among which deep learning models for image-to-image translation have shown good performance by retaining domain-invariant structures while changing domain-specific details. In this paper, we present an overview of diverse deep learning approaches to MR-to-CT synthesis, divided into four classes: convolutional neural networks, generative adversarial networks, transformer models, and diffusion models.


CycleSGAN: A cycle-consistent and semantics-preserving generative adversarial network for unpaired MR-to-CT image synthesis.

Comput Med Imaging Graph

October 2024

Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China.

Article Synopsis
  • CycleGAN has been used to create CT images from MR images using unpaired data, but it struggles with maintaining anatomical accuracy, which is critical for clinical applications.
  • The proposed CycleSGAN improves this process by integrating semantic information through a new structure that includes two types of adversarial learning: one focusing on image appearance and the other on structural consistency.
  • Experimental results show that CycleSGAN outperforms existing methods in producing more accurate and visually appealing synthetic CT images from unpaired MR data.
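The cycle-consistency constraint that CycleGAN-style models (including CycleSGAN) rely on can be sketched as an L1 reconstruction penalty: translating MR to CT and back should recover the original MR image. The generator callables below are stand-ins, not the paper's networks:

```python
import numpy as np

def cycle_consistency_loss(x_mr, g_mr2ct, g_ct2mr):
    """L1 cycle loss: MR -> CT -> MR should reproduce the input.

    g_mr2ct, g_ct2mr : stand-ins for trained generator networks,
                       here plain callables on NumPy arrays.
    """
    x_rec = g_ct2mr(g_mr2ct(x_mr))           # round-trip reconstruction
    return float(np.mean(np.abs(x_rec - x_mr)))
```

A perfect pair of inverse generators yields zero loss; any residual measures how much anatomy is lost in the round trip, which is why this term helps unpaired training preserve structure.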

Background: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas, such as the head-and-neck.


HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge.

Radiother Oncol

September 2024

University of Ljubljana, Faculty Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia.

Background And Purpose: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge.

Materials And Methods: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks.


Background: Magnetic resonance imaging (MRI) plays an increasingly important role in radiotherapy, enhancing the accuracy of target and organ-at-risk delineation, but the absence of electron density information limits its further clinical application. Therefore, the aim of this study is to develop and evaluate a novel unsupervised network (cycleSimulationGAN) for unpaired MR-to-CT synthesis.

Methods: The proposed cycleSimulationGAN in this work integrates contour consistency loss function and channel-wise attention mechanism to synthesize high-quality CT-like images.
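A channel-wise attention mechanism of the kind cycleSimulationGAN describes is commonly implemented in squeeze-and-excitation style: global-average-pool each channel, pass the result through a small bottleneck, and rescale the channels. The following is a generic NumPy sketch with hypothetical weights, not the authors' implementation:

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation style channel-wise attention, illustrative.

    feats : feature map, shape (c, h, w)
    w1    : squeeze projection, shape (c, c // r) for reduction ratio r
    w2    : excitation projection, shape (c // r, c)
    """
    s = feats.mean(axis=(1, 2))                  # squeeze: (c,)
    z = np.maximum(s @ w1, 0.0)                  # bottleneck + ReLU
    scale = 1.0 / (1.0 + np.exp(-(z @ w2)))      # sigmoid gate in (0, 1)
    return feats * scale[:, None, None]          # reweighted channels
```

The gate lets the network emphasize channels that carry structure relevant to the synthesis task while suppressing the rest.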

