CT images for radiotherapy planning are usually acquired in thick slices to reduce the imaging dose, especially for pediatric patients, and to lessen the burden of contouring and treatment planning on additional slices. However, low through-plane resolution may degrade the accuracy of dose calculations. In this paper, a self-supervised deep learning workflow is proposed to synthesize CT images with high through-plane resolution by learning from their high in-plane resolution features. The workflow trains neural networks to learn the mapping from low-resolution (LR) to high-resolution (HR) images in the axial plane. During inference, HR sagittal and coronal images are generated by feeding the respective LR sagittal and coronal images to two parallel-trained neural networks. CT simulation images of 75 head and neck cancer patients (1 mm slice thickness) and 200 CT images of 20 lung cancer patients (3 mm slice thickness) were retrospectively investigated in a cross-validation manner. The generated HR images were inspected qualitatively (visual quality, image intensity profiles, and a preliminary observer study) and quantitatively (mean absolute error, edge keeping index, structural similarity index measure, information fidelity criterion, and visual information fidelity in the pixel domain), with the original CT images of the head and neck and lung cancer patients as the reference. The qualitative results demonstrated that the proposed method can generate CT images with high through-plane resolution for both groups of cancer patients. All improvements in the quantitative metrics were confirmed to be statistically significant by paired two-sample t-test analysis. The novelty of this work is that the proposed deep learning workflow for generating high through-plane resolution CT images in radiotherapy is self-supervised, meaning that it does not rely on ground-truth HR CT images to train the network. In addition, the assumption that in-plane HR information can supervise through-plane HR generation is confirmed. We hope that this will inspire further research on improving the through-plane resolution of medical images.
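To make the scheme concrete, below is a minimal, hedged sketch of the core idea in PyTorch. It is not the authors' implementation: the network architecture, the degradation model (average pooling along one in-plane axis as a thick-slice proxy), the loss, and how the two plane-specific outputs would be fused are all illustrative assumptions. Only the overall scheme follows the abstract: train on axial slices degraded to mimic thick slices, then apply the trained networks to interpolated sagittal and coronal slices.

```python
# Minimal self-supervised SR sketch in PyTorch. Illustrative only: the
# architecture, degradation model, and training loop are assumptions, not
# the authors' code. volume is an H x W x D CT array (D = through-plane).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRNet(nn.Module):
    """Small residual CNN standing in for the paper's SR network."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # refine a bilinearly upsampled input

def make_pair(axial_slice: np.ndarray, factor: int):
    """Degrade one in-plane axis of an HR axial slice to mimic thick slices."""
    hr = torch.from_numpy(np.ascontiguousarray(axial_slice)).float()[None, None]
    lr = F.avg_pool2d(hr, kernel_size=(1, factor))          # thick-slice proxy
    lr_up = F.interpolate(lr, size=hr.shape[-2:], mode="bilinear",
                          align_corners=False)              # back onto HR grid
    return lr_up, hr

def train_axial(volume: np.ndarray, factor: int, steps: int = 2000) -> SRNet:
    """Self-supervised training: only the volume itself is used."""
    net = SRNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for _ in range(steps):
        k = np.random.randint(volume.shape[2])              # random axial slice
        x, y = make_pair(volume[:, :, k], factor)
        opt.zero_grad()
        loss = F.l1_loss(net(x), y)
        loss.backward()
        opt.step()
    return net

@torch.no_grad()
def infer_sagittal(net: SRNet, volume: np.ndarray, factor: int) -> np.ndarray:
    """Upsample the through-plane axis of each sagittal slice, then refine.
    A second, parallel-trained net handles coronal slices the same way; how
    the two HR estimates are fused is not specified here (assumption)."""
    slices = []
    for i in range(volume.shape[0]):
        sag = torch.from_numpy(np.ascontiguousarray(volume[i])).float()[None, None]
        sag_up = F.interpolate(sag, scale_factor=(1.0, float(factor)),
                               mode="bilinear", align_corners=False)
        slices.append(net(sag_up)[0, 0].numpy())
    return np.stack(slices, axis=0)                         # H x W x (D*factor)
```

A correspondingly hedged evaluation sketch follows, using off-the-shelf MAE and SSIM plus a paired t-test; the paper's remaining metrics (edge keeping index, information fidelity criterion, visual information fidelity) would need dedicated implementations and are omitted.

```python
# Hedged evaluation sketch: per-patient MAE and SSIM against the original
# CT reference, and a paired two-sample t-test between methods.
import numpy as np
from scipy.stats import ttest_rel
from skimage.metrics import structural_similarity

def mae(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - ref)))

def ssim(pred: np.ndarray, ref: np.ndarray) -> float:
    return structural_similarity(pred, ref, data_range=ref.max() - ref.min())

# maes_proposed, maes_baseline: lists with one score per patient (hypothetical)
# t_stat, p_value = ttest_rel(maes_proposed, maes_baseline)
```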

Source: http://dx.doi.org/10.1088/1361-6560/ac0684

Publication Analysis

Top Keywords

through-plane resolution (24)
high through-plane (16)
deep learning (12)
images (12)
neural networks (12)
cancer patients (12)
self-supervised deep (8)
learning workflow (8)
resolution images (8)
sagittal coronal (8)

Similar Publications

Magnetic resonance images are often acquired as several 2D slices and stacked into a 3D volume, yielding a lower through-plane resolution than in-plane resolution. Many super-resolution (SR) methods have been proposed to address this, including those that use the inherent high-resolution (HR) in-plane signal as HR data to train deep neural networks. Techniques with this approach are generally both self-supervised and internally trained, so no external training data is required.

Animal models are pivotal in disease research and the advancement of therapeutic methods. The translation of results from these models to clinical applications is enhanced by employing technologies applicable to both humans and animals, such as Magnetic Resonance Imaging (MRI), which offers longitudinal disease evaluation without compromising animal welfare. However, current animal MRI techniques predominantly employ 2D acquisitions due to constraints related to organ size, scan duration, image quality, and hardware limitations.

Purpose: Eye morphology varies significantly across the population, especially for the orbit and optic nerve. These variations limit the feasibility and robustness of generalizing population-wise features of eye organs to an unbiased spatial reference.

Approach: To tackle these limitations, we propose a process for creating high-resolution unbiased eye atlases.

Article Synopsis
  • MRI is essential for diagnosing and monitoring multiple sclerosis (MS), but standard scans often have limited resolution due to thick slices, which affects automated analysis.
  • This study introduces a single-image super-resolution (SR) reconstruction framework using convolutional neural networks (CNN) to enhance MRI resolution in individuals with MS.
  • The results show that the SR method significantly improves MRI reconstruction accuracy and lesion segmentation, making it a valuable tool for analyzing low-resolution MRI data in clinical settings.
Article Synopsis
  • Ultrasound microvascular imaging (UMI) techniques like ultrafast power Doppler imaging (uPDI) and ultrasound localization microscopy (ULM) face challenges with low image quality due to noise from plane wave transmissions.
  • The study introduces a deep learning model called Yformer, which combines convolution and Transformer architectures to enhance UMI by effectively estimating noise and signal power, leading to improved image quality and lower computational costs.
  • In vivo tests on rat brains show that Yformer achieves high structural similarity (SSIM > 0.95) and significantly increases the resolution of liver ULM, demonstrating excellent adaptability across various datasets.