Publications by authors named "Myeongkyun Kang"

Article Synopsis
  • There is increased interest in using high-resolution T2 (hrT2) imaging for vestibular schwannoma (VS) and cochlea research, due to fewer side effects compared to contrast-enhanced T1 (ceT1) imaging.
  • The challenge lies in a lack of annotated hrT2 data, which is crucial for developing effective segmentation models.
  • The proposed solution is a target-aware unsupervised domain adaptation framework that focuses on the unique visual characteristics and sizes of these objects, improving image translation and preserving the quality of small structures.

Purpose: The purpose of this study was to develop a deep learning model for predicting the axial length (AL) of eyes using optical coherence tomography (OCT) images.

Methods: We retrospectively included patients with AL measurements and OCT images taken within 3 months of each other. We used 5-fold cross-validation with the ResNet-152 architecture, with three input configurations: horizontal OCT images, vertical OCT images, and dual-input images.
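As a rough illustration of the 5-fold protocol described above, the sketch below partitions patient indices into five folds, each serving once as the validation set. This is a minimal sketch of the general technique; the actual fold assignment and any patient-level stratification in the study are not specified here and the function name is an assumption.

```python
import random

def five_fold_splits(n_patients, seed=0):
    """Partition patient indices into 5 folds; each fold serves once as validation."""
    idx = list(range(n_patients))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]  # round-robin assignment after shuffling
    for k in range(5):
        val = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train, val

for train, val in five_fold_splits(100):
    assert len(train) + len(val) == 100
    assert not set(train) & set(val)  # no patient appears in both splits
```

Splitting at the patient level (rather than the image level) is the standard way to avoid leakage when one patient contributes several OCT images.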


Purpose: Deep learning-based image enhancement has significant potential in ultrasound image processing, as it can accurately model complicated nonlinear artifacts and noise, such as ultrasonic speckle patterns. However, acquiring clean, noise-free reference images with which to train such networks is a significant challenge. This study introduces an unsupervised deep learning framework, termed speckle-to-speckle (S2S), designed for speckle and noise suppression.
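The statistical premise behind training on noisy pairs instead of clean references can be demonstrated numerically: independent speckle realizations of the same underlying signal average toward that signal. The snippet below is only an illustration of this premise with a simple multiplicative-noise model (an assumption for demonstration), not the S2S architecture itself.

```python
import numpy as np

rng = np.random.default_rng(42)
clean = np.linspace(1.0, 2.0, 256)  # stand-in for a clean ultrasound scan line

def speckled(signal):
    # Fully developed speckle is often modeled as multiplicative noise;
    # a unit-mean exponential is used here purely for illustration.
    return signal * rng.exponential(1.0, size=signal.shape)

one_shot = speckled(clean)
averaged = np.mean([speckled(clean) for _ in range(64)], axis=0)

err_one = np.mean((one_shot - clean) ** 2)
err_avg = np.mean((averaged - clean) ** 2)
assert err_avg < err_one  # independent realizations average toward the clean signal
```

This is the same intuition that lets Noise2Noise-style methods regress one noisy realization onto another: the expected target is the clean signal, so no noise-free reference is ever needed.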


Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (sources) to a new unlabeled dataset (target). Despite impressive performance, existing approaches have largely focused on image-based UDA; video-based UDA remains understudied because adapting diverse video modalities and modeling temporal associations efficiently are both difficult. To capture motion cues between consecutive in-domain frames, existing studies use optical flow, but flow estimation carries heavy compute requirements, and modeling flow patterns across diverse domains is equally challenging.

Article Synopsis
  • Multi-organ CT segmentation uses deep learning models, but training them effectively is difficult due to varying data quality and labeling across institutions.
  • Federated learning offers a way to train models on these diverse datasets without sharing the actual data, but it can struggle with accuracy due to 'catastrophic forgetting' when updating models locally.
  • The authors propose a solution using knowledge distillation to enhance local model training and implement this in a multi-head U-Net architecture, resulting in improved accuracy and efficiency across eight abdominal CT datasets compared to existing methods.
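The distillation term such a method typically builds on can be sketched concretely. The snippet below implements the standard Hinton-style knowledge-distillation loss (temperature-softened KL divergence) in plain numpy; the paper's exact losses and multi-head U-Net wiring are not reproduced here, so treat this as a generic building block, not the authors' implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs (Hinton-style KD)."""
    p = softmax(teacher_logits, T)    # soft targets from the (global) teacher
    q = softmax(student_logits, T)    # local student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

# Identical logits -> zero distillation loss; diverging logits -> positive loss.
assert abs(kd_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])) < 1e-9
assert kd_loss([3.0, 0.0, 0.0], [0.0, 3.0, 0.0]) > 0.0
```

In a federated setting, keeping the local student close to the global teacher's soft predictions is one standard way to counter catastrophic forgetting during local updates.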

One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Notably, as feature distributions in medical data are less discriminative than those of natural images, robust global model training with FL is non-trivial and can lead to overfitting. To address this issue, we propose a novel one-shot FL framework leveraging Image Synthesis and Client model Adaptation (FedISCA) with knowledge distillation (KD).
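For context on what "one-shot" means here, a common baseline is a single weighted aggregation of client parameters, i.e., federated averaging run for exactly one communication round. The sketch below shows that baseline only; FedISCA's image synthesis and client-model adaptation with KD are more involved and are not reproduced here.

```python
import numpy as np

def one_shot_average(client_weights, client_sizes):
    """Single-round federated averaging: one weighted aggregation of client
    parameter lists (a common one-shot baseline, not FedISCA itself)."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

w1 = [np.zeros((2, 2)), np.zeros(2)]   # client 1: all-zero parameters
w2 = [np.ones((2, 2)), np.ones(2)]     # client 2: all-one parameters
avg = one_shot_average([w1, w2], client_sizes=[1, 3])
assert np.allclose(avg[0], 0.75)       # client 2 holds 3/4 of the data
```

With medical data, such naive averaging is exactly where the weak, less discriminative feature distributions hurt, which motivates distillation-based alternatives.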


Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples, since the biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information remains challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images, combining the content of a source image with the texture of a target image that has a different bias property, to explicitly mitigate texture bias when training a model on a target task.
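One simple, training-free way to graft one image's texture statistics onto another's content is to swap low-frequency Fourier amplitudes while keeping the phase, which carries most of the structural content (the idea behind Fourier Domain Adaptation). The paper's translation network is learned, so the sketch below is only an analogy for the content/texture recombination it describes, with all names and the `beta` band size chosen for illustration.

```python
import numpy as np

def fourier_texture_swap(content, style, beta=0.1):
    """Replace the low-frequency amplitude of `content` with that of `style`,
    keeping content phase (which carries most structural information)."""
    fc, fs = np.fft.fft2(content), np.fft.fft2(style)
    amp = np.fft.fftshift(np.abs(fc))        # center the low frequencies
    amp_s = np.fft.fftshift(np.abs(fs))
    phase_c = np.angle(fc)
    h, w = content.shape
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    ch, cw = h // 2, w // 2
    amp[ch - bh:ch + bh, cw - bw:cw + bw] = amp_s[ch - bh:ch + bh, cw - bw:cw + bw]
    amp = np.fft.ifftshift(amp)
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase_c)))

rng = np.random.default_rng(0)
content = rng.random((32, 32))
style = rng.random((32, 32))
out = fourier_texture_swap(content, style)
assert out.shape == content.shape
```

Augmenting training data with such texture-swapped variants forces the downstream model to rely on content rather than texture cues.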


Although ultrasound plays an important role in the diagnosis of chronic kidney disease (CKD), image interpretation requires extensive training. High operator variability and limited quality control of ultrasound images have made computer-aided diagnosis (CAD) challenging to apply. This study assessed the effect of integrating computer-extracted measurable features with a convolutional neural network (CNN) on the accuracy of ultrasound-based CAD of CKD.
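A common way to integrate measurable features with a CNN is late fusion: concatenate the handcrafted measurements onto the learned image embedding and feed the combined vector to a classification head. The sketch below shows that pattern in plain numpy; the specific feature names, dimensions, and weights are hypothetical stand-ins, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(7)

cnn_embedding = rng.random(128)          # stand-in for a CNN image feature vector
measured = np.array([10.2, 1.8, 0.6])    # hypothetical measurable features
                                         # (e.g. kidney length in cm), illustrative only

# Late fusion: concatenate measurements onto the learned embedding,
# then apply a logistic classification head to the combined vector.
fused = np.concatenate([cnn_embedding, measured])
w = rng.random(fused.shape[0])           # stand-in classifier weights
b = -float(w @ fused)                    # bias chosen so this demo's logit is zero
prob = 1.0 / (1.0 + np.exp(-(w @ fused + b)))
assert fused.shape == (131,)
```

In practice the handcrafted features would be standardized before concatenation so their scale does not dominate the learned embedding.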


Chest computed tomography (CT)-based analysis and diagnosis of Coronavirus Disease 2019 (COVID-19) plays a key role in combating the outbreak of the pandemic that has rapidly spread worldwide. To date, the disease has infected more than 18 million people, with over 690,000 deaths reported. Reverse transcription polymerase chain reaction (RT-PCR) is the current gold standard for clinical diagnosis but may produce false negatives; thus, chest CT-based diagnosis is considered more viable.


Background: It is difficult to distinguish the subtle differences seen in computed tomography (CT) images of coronavirus disease 2019 (COVID-19) and bacterial pneumonia patients, which often leads to inaccurate diagnoses. It is therefore desirable to design and evaluate interpretable feature extraction techniques that describe the patient's condition.

Methods: This is a retrospective cohort study of 170 patients with confirmed COVID-19 or bacterial pneumonia whose images were acquired at Yeungnam University Hospital in Daegu, Korea.
