Publications by authors named "Kyu Hwan Jung"

Article Synopsis
  • Hospital call centers are vital for supporting cancer patients, and accurately identifying the intent of their inquiries is essential; however, existing models such as LSTM and BERT face challenges because they rely on labor-intensive, annotated datasets.
  • This study tests GPT-4's ability to classify the intents of cancer patient phone consultations and compares its performance with traditional models like LSTM and BERT, particularly in handling complex queries.
  • GPT-4 demonstrates superior accuracy (85.2%) compared to LSTM (73.7%) and BERT (71.3%) in processing patient inquiries, showing its effectiveness with fewer training examples.

Purpose: To evaluate the clinical usefulness of a deep learning-based detection device for multiple abnormal findings on retinal fundus photographs for readers with varying expertise.

Methods: Fourteen ophthalmologists (six residents, eight specialists) assessed 399 fundus images with respect to 12 major ophthalmologic findings, with or without the assistance of a deep learning algorithm, in two separate reading sessions. Sensitivity, specificity, and reading time per image were compared.

Article Synopsis
  • The White Blood Cell (WBC) differential test is a commonly used diagnostic tool that involves expert analysis of blood samples to identify abnormalities.
  • Automated digital microscopy offers a more efficient alternative to manual inspections, though it faces challenges with capturing high-quality images due to the need for multiple focal planes.
  • A new dataset of 25,773 image stacks from 72 patients has been created, featuring 18 cell classes (both normal and abnormal), which has been meticulously labeled by experts to support advanced WBC classification using machine learning techniques.

Objective: To develop a deep-learning-based bone age prediction model optimized for Korean children and adolescents and evaluate its feasibility by comparing it with a Greulich-Pyle-based deep-learning model.

Materials And Methods: A convolutional neural network was trained to predict age according to the bone development shown on a hand radiograph (bone age) using 21,036 hand radiographs of Korean children and adolescents without known bone development-affecting diseases/conditions obtained between 1998 and 2019 (median age [interquartile range {IQR}], 9 [7-12] years; male:female, 11,794:9,242) and their chronological ages as labels (Korean model). We constructed 2 separate external datasets consisting of Korean children and adolescents with healthy bone development (Institution 1: n = 343; median age [IQR], 10 [4-15] years; male:female, 183:160; Institution 2: n = 321; median age [IQR], 9 [5-14] years; male:female, 164:157) to test the model performance.
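The training setup described above — a CNN regressing bone age directly from a hand radiograph, with chronological age as the label — can be sketched as follows. The abstract does not specify the actual architecture, so the network below is a deliberately small, illustrative stand-in.

```python
import torch
import torch.nn as nn

class BoneAgeRegressor(nn.Module):
    """Toy CNN mapping a grayscale hand radiograph to a single age estimate.
    Layer counts and widths are illustrative, not the paper's real model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)          # predicted age in years

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BoneAgeRegressor()
x = torch.randn(4, 1, 256, 256)               # batch of 4 radiographs
# Chronological ages serve as the regression targets (L1 loss as one choice).
loss = nn.functional.l1_loss(model(x), torch.tensor([[9.], [7.], [12.], [10.]]))
```

Training on ~21,000 labeled radiographs would then simply minimize this loss over mini-batches with a standard optimizer.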


The identification of abnormal findings manifested in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancy amongst readers. However, the obscure reasoning of deep neural networks (DNNs) has been a leading cause of reluctance to adopt them clinically as CAD systems.


Background: Since image-based fracture prediction models using deep learning are lacking, we aimed to develop an X-ray-based fracture prediction model using deep learning with longitudinal data.

Methods: This study included 1,595 participants aged 50 to 75 years with at least two lumbosacral radiographs without baseline fractures from 2010 to 2015 at Seoul National University Hospital. Positive and negative cases were defined according to whether vertebral fractures developed during follow-up.


Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, where a deep neural network (DNN) is trained to classify and localize nodular patterns (including mass) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of nodular patterns arising in practical clinical settings.


Objectives: Bone age is considered an indicator for the diagnosis of precocious or delayed puberty and a predictor of adult height. We aimed to evaluate the performance of a deep neural network model in assessing rapidly advancing bone age during puberty using elbow radiographs.

Methods: In all, 4437 anteroposterior and lateral pairs of elbow radiographs were obtained from pubertal individuals from two institutions to implement and validate a deep neural network model.


Background Previous studies assessing the effects of computer-aided detection on observer performance in the reading of chest radiographs used a sequential reading design that may have biased the results because of reading order or recall bias. Purpose To compare observer performance in detecting and localizing major abnormal findings including nodules, consolidation, interstitial opacity, pleural effusion, and pneumothorax on chest radiographs without versus with deep learning-based detection (DLD) system assistance in a randomized crossover design. Materials and Methods This study included retrospectively collected normal and abnormal chest radiographs between January 2016 and December 2017 (; registration no.


Background Studies on the optimal CT section thickness for detecting subsolid nodules (SSNs) with computer-aided detection (CAD) are lacking. Purpose To assess the effect of CT section thickness on CAD performance in the detection of SSNs and to investigate whether deep learning-based super-resolution algorithms for reducing CT section thickness can improve performance. Materials and Methods CT images with 1-, 3-, and 5-mm-thick sections were obtained in patients who underwent surgery between March 2018 and December 2018.

Article Synopsis
  • Deep learning techniques have been applied to diagnose cancer through digital pathology images, with different methods focusing on specific cohorts or combining data from multiple cohorts when few images are available.
  • Experimental results indicate a trade-off: using fewer models improved detection performance but required extensive dataset collection from the target cohort.
  • The study proposes new metrics to measure morphological similarities between cohorts, aiming to streamline dataset augmentation without a linear increase in the number of models, thereby optimizing cancer detection performance.

Purpose: To evaluate deep learning technologies for detecting high accumulation of coronary artery calcium (CAC) from retinal fundus images as an inexpensive and radiation-free screening method.

Methods: Individuals who underwent bilateral retinal fundus imaging and CAC score (CACS) evaluation from coronary computed tomography scans on the same day were identified. With this database, performances of deep learning algorithms (inception-v3) to distinguish high CACS from CACS of 0 were evaluated at various thresholds for high CACS.


Purpose: Gastric cancer remains the leading cause of cancer-related deaths in Northeast Asia. Population-based endoscopic screenings in the region have yielded successful results in early detection of gastric tumors. Endoscopic screening rates are continuously increasing, and there is a need for an automatic computerized diagnostic system to reduce the diagnostic burden.


In recent years, artificial intelligence (AI) technologies have greatly advanced and become a reality in many areas of our daily lives. In the health care field, numerous efforts are being made to implement AI technology in practical medical treatment. With the rapid developments in machine learning algorithms and improvements in hardware performance, AI technology is expected to play an important role in effectively analyzing and utilizing extensive amounts of health and medical data.


To investigate the reproducibility of computer-aided detection (CAD) of pulmonary nodules and masses on consecutive chest radiographs (CXRs) of the same patient within a short-term period. A total of 944 CXRs (chest PA) with nodules and masses, recorded between January 2010 and November 2016 at the Asan Medical Center, were obtained. In all, 1092 regions of interest for the nodules and masses were delineated using in-house software.


In this study, a deep learning-based method for developing an automated diagnostic support system that detects periodontal bone loss in panoramic dental radiographs is proposed. The presented method, called DeNTNet, not only detects lesions but also provides the corresponding tooth numbers of the lesions according to dental federation notation. DeNTNet applies deep convolutional neural networks (CNNs) using transfer learning and clinical prior knowledge to overcome the morphological variation of the lesions and an imbalanced training dataset.


Objective: To investigate the feasibility of a deep learning-based detection (DLD) system for multiclass lesions on chest radiograph, in comparison with observers.

Methods: A total of 15,809 chest radiographs were collected from two tertiary hospitals (7204 normal and 8605 abnormal with nodule/mass, interstitial opacity, pleural effusion, or pneumothorax). Except for the test set (100 normal and 100 abnormal (nodule/mass, 70; interstitial opacity, 10; pleural effusion, 10; pneumothorax, 10)), radiographs were used to develop a DLD system for detecting multiclass lesions.


Objective: To retrospectively assess the effect of CT slice thickness on the reproducibility of radiomic features (RFs) of lung cancer, and to investigate whether convolutional neural network (CNN)-based super-resolution (SR) algorithms can improve the reproducibility of RFs obtained from images with different slice thicknesses.

Materials And Methods: CT images with 1-, 3-, and 5-mm slice thicknesses obtained from 100 pathologically proven lung cancers between July 2017 and December 2017 were evaluated. CNN-based SR algorithms using residual learning were developed to convert thick-slice images into 1-mm slices.
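The residual-learning idea mentioned above — the network predicts only the high-frequency difference between an interpolated thick-slice image and the target 1-mm image — can be sketched as follows. The depth, width, and slab geometry are assumptions for illustration, not the study's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSR(nn.Module):
    """Sketch of residual-learning super-resolution along the slice axis:
    the CNN refines a bilinear upsampling rather than predicting the image
    from scratch. Layer sizes here are illustrative only."""
    def __init__(self, scale=3):          # e.g. 3-mm -> 1-mm slice spacing
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        # x: (N, 1, slices, width) sagittal/coronal slab; upsample slice axis only.
        up = F.interpolate(x, scale_factor=(self.scale, 1),
                           mode="bilinear", align_corners=False)
        return up + self.body(up)          # interpolation + learned residual

y = ResidualSR()(torch.randn(1, 1, 40, 64))  # 40 thick slices -> 120 thin slices
```

Predicting the residual rather than the full image is the standard trick that lets such networks converge quickly, since the interpolated input already carries most of the low-frequency content.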


Purpose: To develop and evaluate deep learning models that screen multiple abnormal findings in retinal fundus images.

Design: Cross-sectional study.

Participants: For the development and testing of deep learning models, 309,786 readings from 103,262 images were used.


Background: We described a novel multi-step retinal fundus image reading system for providing high-quality large data for machine learning algorithms, and assessed the grader variability in the large-scale dataset generated with this system.

Methods: A 5-step retinal fundus image reading tool was developed that rates image quality, presence of abnormality, findings with location information, diagnoses, and clinical significance. Each image was evaluated by 3 different graders.


Automatic segmentation of the retinal vasculature and the optic disc is a crucial task for accurate geometric analysis and reliable automated diagnosis. In recent years, convolutional neural networks (CNNs) have shown outstanding performance compared with conventional approaches in segmentation tasks. In this paper, we experimentally measure the performance gain of the generative adversarial network (GAN) framework when applied to these segmentation tasks.
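In a GAN framework for segmentation, the segmentation network plays the generator and a discriminator scores image–mask pairs, pushing predicted masks toward realistic vessel shapes. A minimal sketch, with toy networks and shapes that are assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

# Generator = segmentation net; discriminator judges (image, mask) pairs.
segmenter = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(4, 8, 3, stride=2, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(8, 1))

bce = nn.BCEWithLogitsLoss()
img = torch.randn(2, 3, 64, 64)                 # fundus patches
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()   # ground-truth vessel masks

pred = segmenter(img)
# Discriminator sees the image concatenated with the predicted mask.
d_fake = discriminator(torch.cat([img, pred], dim=1))
# Generator objective: per-pixel segmentation loss + fool-the-discriminator term.
g_loss = nn.functional.binary_cross_entropy(pred, gt) \
         + bce(d_fake, torch.ones_like(d_fake))
```

The adversarial term acts as a learned shape prior on top of the per-pixel loss, which is where the measured performance gain would come from.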


In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing clinicians' workload, but also highlights the key regions in the image and evaluates the uncertainty of the decision with proper analytic tools. Our model was trained and tested with 25,911 fundus images (43.


This study aimed to compare shallow and deep learning approaches for classifying the patterns of interstitial lung diseases (ILDs). Using high-resolution computed tomography images, two experienced radiologists marked 1200 regions of interest (ROIs): 600 ROIs each were acquired using a GE or Siemens scanner, and each group of 600 consisted of 100 ROIs per subregion class, covering normal tissue and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers, consisting of four convolutional layers and two fully connected layers.
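A CNN with exactly six learnable layers (four convolutional, two fully connected) classifying the six ROI classes could look like the following; the kernel sizes, channel widths, and 32x32 ROI size are assumed values, not the study's reported configuration.

```python
import torch
import torch.nn as nn

CLASSES = ["normal", "ground-glass opacity", "consolidation",
           "reticular opacity", "emphysema", "honeycombing"]

# Six learnable layers as described: four convolutional + two fully connected.
# (ReLU and pooling layers carry no learnable weights.)
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 8 -> 4
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 128), nn.ReLU(),                       # FC layer 1
    nn.Linear(128, len(CLASSES)),                                # FC layer 2
)
logits = model(torch.randn(5, 1, 32, 32))  # five HRCT ROI patches
```

The "shallow learning" comparator would replace this network with handcrafted texture features fed to a conventional classifier such as an SVM.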
