Publications by authors named "Donghwi Hwang"

Bone scans play an important role in skeletal lesion assessment, but gamma cameras suffer from low sensitivity and high noise levels. Deep learning (DL) has emerged as a promising way to enhance image quality without increasing radiation exposure or scan time. However, existing self-supervised denoising methods, such as Noise2Noise (N2N), may introduce deviations from the clinical standard in bone scans.


Purpose: Effective radiation therapy requires accurate segmentation of head and neck cancer, one of the most common types of cancer. With the advancement of deep learning, various methods have been proposed that use positron emission tomography-computed tomography (PET/CT) to obtain complementary information. However, these approaches are computationally expensive because feature extraction and fusion are performed as separate steps, and they do not exploit the high sensitivity of PET.
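As a toy illustration of the joint feature extraction alluded to above, the sketch below stacks PET and CT as input channels of one small 3D network so that a single shared encoder learns fused features. The EarlyFusionSegNet name, layer sizes, and patch size are assumptions for illustration, not the architecture proposed in the paper.

# Minimal early-fusion sketch (assumed architecture): PET and CT are stacked
# as input channels so a single encoder extracts joint features instead of
# running separate feature-extraction and fusion stages.
import torch
import torch.nn as nn

class EarlyFusionSegNet(nn.Module):
    def __init__(self, base_ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, base_ch, kernel_size=3, padding=1),  # 2 channels: PET + CT
            nn.ReLU(inplace=True),
            nn.Conv3d(base_ch, base_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(base_ch, 1, kernel_size=1)  # tumor probability map

    def forward(self, pet, ct):
        x = torch.cat([pet, ct], dim=1)  # fuse at the input level
        return torch.sigmoid(self.head(self.encoder(x)))

# Example: one 64x64x64 patch with batch size 1
pet = torch.rand(1, 1, 64, 64, 64)
ct = torch.rand(1, 1, 64, 64, 64)
mask = EarlyFusionSegNet()(pet, ct)  # (1, 1, 64, 64, 64), values in [0, 1]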


Purpose: Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [¹⁸F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [¹⁸F]FDG PET/CT.

Methods: The whole-body [¹⁸F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest was drawn using the LifeX software.
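The following is a minimal sketch of a generic two-stage segmentation pipeline of the kind named in the Purpose: a first network proposes a coarse tumor region on the whole volume, the input is cropped around its centroid, and a second network refines the mask. TinyNet, the crop size, and the threshold are placeholders, not the paper's U-Nets or settings.

# Hedged two-stage sketch: coarse localization, crop, then fine segmentation.
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # stand-in for a 3D U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(8, 1, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def two_stage_segment(pet, ct, stage1, stage2, crop=64, thr=0.5):
    x = torch.cat([pet, ct], dim=1)
    coarse = stage1(x)                              # stage 1: coarse mask
    idx = (coarse[0, 0] > thr).nonzero()
    if idx.numel() == 0:                            # nothing detected
        return torch.zeros_like(coarse)
    cz, cy, cx = idx.float().mean(dim=0).long()     # rough tumor centroid
    h = crop // 2
    sl = [slice(max(int(c) - h, 0), int(c) + h) for c in (cz, cy, cx)]
    patch = x[:, :, sl[0], sl[1], sl[2]]            # cropped region around tumor
    fine = stage2(patch)                            # stage 2: refined mask
    out = torch.zeros_like(coarse)
    out[:, :, sl[0], sl[1], sl[2]] = fine
    return out

mask = two_stage_segment(torch.rand(1, 1, 96, 96, 96),
                         torch.rand(1, 1, 96, 96, 96),
                         TinyNet(), TinyNet())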


Purpose: Quantitative thyroid single-photon emission computed tomography/computed tomography (SPECT/CT) requires CT-based attenuation correction and manual thyroid segmentation on CT for percent thyroid uptake (%thyroid uptake) measurements. Here, we aimed to develop a deep-learning-based CT-free quantitative thyroid SPECT that can generate an attenuation map (μ-map) and automatically segment the thyroid.

Methods: Quantitative thyroid SPECT/CT data (n = 650) were retrospectively analyzed.
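A minimal sketch of one way such a CT-free pipeline could be structured: a shared encoder on the SPECT input feeds two heads, one regressing the attenuation map (μ-map) and one predicting a thyroid mask. The architecture and the CTFreeThyroidNet name are assumptions for illustration, not the network reported in the study.

# Hedged multi-task sketch: one shared encoder, two output heads.
import torch
import torch.nn as nn

class CTFreeThyroidNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
                                    nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        self.mu_head = nn.Conv3d(ch, 1, 1)    # attenuation coefficients (1/cm)
        self.seg_head = nn.Conv3d(ch, 1, 1)   # thyroid probability

    def forward(self, spect):
        f = self.shared(spect)
        return self.mu_head(f), torch.sigmoid(self.seg_head(f))

spect = torch.rand(1, 1, 64, 64, 64)
mu_map, thyroid_mask = CTFreeThyroidNet()(spect)
# %thyroid uptake would then combine attenuation-corrected counts (using mu_map)
# with the voxels inside the predicted thyroid mask.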


Purpose: This study aims to compare two approaches that use only emission PET data and a convolutional neural network (CNN) to correct for the attenuation (μ) of the annihilation photons in PET.

Methods: One approach uses a CNN to generate μ-maps directly from the non-attenuation-corrected (NAC) PET images. In the other, a CNN is used to improve the accuracy of μ-maps generated by maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction.
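The sketch below illustrates the two input configurations being compared, assuming a simple convolutional regressor in both cases; make_cnn and the layer sizes are placeholders, and training both networks against the CT-derived μ-map with an L1 loss is an assumption.

# Hedged sketch of the two emission-only attenuation-correction strategies:
# (a) regress the mu-map directly from the NAC PET image, or
# (b) feed the noisy MLAA activity and mu-map estimates to a CNN that outputs
#     a cleaned, CT-like mu-map.
import torch
import torch.nn as nn

def make_cnn(in_ch):
    return nn.Sequential(nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(16, 1, 1))        # -> mu-map

cnn_from_nac = make_cnn(in_ch=1)    # input: NAC PET only
cnn_from_mlaa = make_cnn(in_ch=2)   # input: MLAA activity + MLAA mu-map

nac_pet = torch.rand(1, 1, 64, 64, 64)
mlaa_act = torch.rand(1, 1, 64, 64, 64)
mlaa_mu = torch.rand(1, 1, 64, 64, 64)

mu_a = cnn_from_nac(nac_pet)
mu_b = cnn_from_mlaa(torch.cat([mlaa_act, mlaa_mu], dim=1))
# Both would be trained against the CT-derived mu-map with a voxel-wise loss,
# e.g. nn.L1Loss()(mu_a, mu_ct).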


We propose a deep learning-based, data-driven respiratory phase-matched gated-PET attenuation correction (AC) method that does not require a gated CT. The proposed method is a multi-step process consisting of data-driven respiratory gating, gated attenuation map estimation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm, and enhancement of the gated attenuation maps using a convolutional neural network (CNN). The gated MLAA attenuation maps enhanced by the CNN allowed for phase-matched AC of the gated PET images.
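A hedged outline of that multi-step flow is sketched below, with trivial random-array stubs standing in for the gating, MLAA, and reconstruction steps so the control flow runs end to end; none of the stub functions correspond to real APIs.

# Outline of the gated AC pipeline with placeholder stubs for each step.
import numpy as np

def data_driven_gating(listmode, n_gates):          # stub: split events by respiratory phase
    return np.array_split(listmode, n_gates)

def mlaa_reconstruct(events, shape=(64, 64, 64)):   # stub: per-gate activity + mu-map
    return np.random.rand(*shape), np.random.rand(*shape)

def cnn_enhance(activity, mu_noisy):                # stub for the trained CNN
    return 0.5 * (mu_noisy + activity.mean())       # placeholder smoothing

def reconstruct_with_ac(events, mu_map):            # stub: attenuation-corrected gated image
    return np.random.rand(*mu_map.shape)

def gated_ac_pipeline(listmode, n_gates=4):
    gated_images = []
    for gate in data_driven_gating(listmode, n_gates):   # 1) data-driven respiratory gating
        activity, mu_mlaa = mlaa_reconstruct(gate)       # 2) per-gate MLAA estimation
        mu_enhanced = cnn_enhance(activity, mu_mlaa)     # 3) CNN enhancement of the gated mu-map
        gated_images.append(reconstruct_with_ac(gate, mu_enhanced))  # 4) phase-matched AC
    return gated_images

images = gated_ac_pipeline(np.arange(100000))  # fake list-mode event stream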

Nucl Med Mol Imaging, December 2020

Purpose: Early deep-learning-based image denoising techniques mainly focused on fully supervised models that learn to generate a clean image from a noisy input (noise2clean: N2C). The aim of this study is to explore the feasibility of self-supervised methods (noise2noise: N2N and noiser2noise: Nr2N) for PET image denoising on measured PET data sets by comparing their performance with the conventional N2C model.

Methods: For training and evaluating the networks, the ¹⁸F-FDG brain PET/CT scan data of 14 patients were retrospectively used (10 for training and 4 for testing).
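As a rough illustration of how the three schemes differ only in how input-target pairs are formed, the sketch below builds N2C, N2N, and Nr2N pairs from simulated data; the Gaussian noise model and the added-noise level are simplifying assumptions, not how the measured PET realizations were actually obtained.

# Hedged sketch of the three training-pair constructions:
#   N2C  : noisy image -> clean (full-count) image
#   N2N  : one noisy realization -> another independent noisy realization
#   Nr2N : an even noisier version of the input -> the original noisy image
import torch
import torch.nn as nn

def training_pair(scheme, clean, noisy_a, noisy_b, extra_noise_std=0.1):
    if scheme == "N2C":
        return noisy_a, clean
    if scheme == "N2N":
        return noisy_a, noisy_b
    if scheme == "Nr2N":
        noisier = noisy_a + extra_noise_std * torch.randn_like(noisy_a)
        return noisier, noisy_a
    raise ValueError(scheme)

denoiser = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(16, 1, 3, padding=1))
clean = torch.rand(1, 1, 32, 32, 32)
noisy_a = clean + 0.1 * torch.randn_like(clean)   # simulated noisy realization 1
noisy_b = clean + 0.1 * torch.randn_like(clean)   # simulated noisy realization 2

for scheme in ("N2C", "N2N", "Nr2N"):
    x, target = training_pair(scheme, clean, noisy_a, noisy_b)
    loss = nn.functional.mse_loss(denoiser(x), target)  # backward/optimizer step omitted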


Personalized dosimetry with high accuracy is crucial owing to the growing interest in personalized medicine. Direct Monte Carlo simulation is considered the state-of-the-art voxel-based dosimetry technique; however, it incurs excessive computational cost and time. To overcome the limitations of the direct Monte Carlo approach, we propose using a deep convolutional neural network (CNN) for voxel dose prediction.
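A minimal sketch of the general idea: a CNN takes the activity distribution and a density (CT-derived) map as two input channels and is regressed toward a Monte Carlo dose map. The channel choice, network depth, and L1 loss are assumptions for illustration rather than the published model.

# Hedged voxel-dosimetry sketch: activity + density in, predicted dose out,
# trained against Monte Carlo dose as the label.
import torch
import torch.nn as nn

dose_net = nn.Sequential(
    nn.Conv3d(2, 32, 3, padding=1), nn.ReLU(),   # channels: activity, density
    nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 1, 1),                         # predicted dose per voxel
)

activity = torch.rand(1, 1, 64, 64, 64)   # e.g. normalized activity map
density = torch.rand(1, 1, 64, 64, 64)    # e.g. normalized density map
mc_dose = torch.rand(1, 1, 64, 64, 64)    # Monte Carlo ground-truth dose

pred = dose_net(torch.cat([activity, density], dim=1))
loss = nn.functional.l1_loss(pred, mc_dose)   # voxel-wise regression loss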


We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use activity and attenuation maps estimated using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) to learn a CT-derived attenuation map. The whole-body ¹⁸F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.
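The following sketch shows one training step consistent with that description, with the MLAA activity and attenuation estimates concatenated as a two-channel input and the CT-derived μ-map as the regression target; the tiny network, patch size, and L1 loss are assumptions for illustration.

# Hedged single training step: MLAA (activity, mu) -> CT-derived mu-map.
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv3d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(32, 1, 1))
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-4)

mlaa_activity = torch.rand(2, 1, 32, 32, 32)   # mini-batch of patches
mlaa_mu = torch.rand(2, 1, 32, 32, 32)
ct_mu = torch.rand(2, 1, 32, 32, 32)           # CT-derived label

pred_mu = cnn(torch.cat([mlaa_activity, mlaa_mu], dim=1))
loss = nn.functional.l1_loss(pred_mu, ct_mu)
loss.backward()
optimizer.step()
optimizer.zero_grad()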


The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low-resolution (thick-slice) and high-resolution (thin-slice) images using a modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data from existing thin-slice CT images as the input and their middle slice as the label.
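A short sketch of the training-pair construction described here: adjacent thin slices are averaged to simulate a thick-slice input and the middle thin slice is kept as the label; the averaging factor of 3 is an assumed example value.

# Hedged data-preparation sketch for the super-resolution training pairs.
import numpy as np

def make_pairs(thin_ct, factor=3):
    """thin_ct: (n_slices, H, W) thin-slice CT volume."""
    inputs, labels = [], []
    for start in range(0, thin_ct.shape[0] - factor + 1, factor):
        block = thin_ct[start:start + factor]
        inputs.append(block.mean(axis=0))   # simulated thick slice (input)
        labels.append(block[factor // 2])   # middle thin slice (label)
    return np.stack(inputs), np.stack(labels)

thin_ct = np.random.rand(30, 128, 128).astype(np.float32)
x, y = make_pairs(thin_ct)   # x, y: (10, 128, 128) training pairs for the U-Net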


Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence speed, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset.
