Publications by authors named "Jaeyoung Huh"

Automatic Speech Recognition (ASR) is a technology that converts spoken words into text, facilitating interaction between humans and machines. One of its most common applications is Speech-To-Text (STT) transcription, which simplifies user workflows. In the medical field, STT has the potential to significantly reduce the workload of clinicians who currently rely on typists to transcribe their voice recordings.
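
As a purely illustrative sketch of the STT workflow (not the model used in this work), the following Python snippet transcribes an audio file with an off-the-shelf wav2vec 2.0 pipeline from torchaudio and a greedy CTC decode; the file name "dictation.wav" is a placeholder.

    import torch
    import torchaudio

    # Off-the-shelf English ASR pipeline (wav2vec 2.0 fine-tuned on LibriSpeech).
    bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
    model = bundle.get_model().eval()

    # "dictation.wav" is a placeholder for a clinician's voice recording.
    waveform, sample_rate = torchaudio.load("dictation.wav")
    if sample_rate != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

    with torch.inference_mode():
        emissions, _ = model(waveform)          # frame-wise character logits

    # Greedy CTC decoding: drop repeats and the blank token '-', map '|' to spaces.
    labels = bundle.get_labels()
    prev, chars = None, []
    for idx in emissions[0].argmax(dim=-1).tolist():
        if idx != prev and labels[idx] != "-":
            chars.append(labels[idx])
        prev = idx
    print("".join(chars).replace("|", " "))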


In contrast to 2-D ultrasound (US), which images a single plane, a 3-D US imaging system can visualize a volume along three axial planes. This allows a full view of the anatomy, which is useful for gynecological (GYN) and obstetrical (OB) applications. Unfortunately, 3-D US has an inherent resolution limitation compared to 2-D US.


Recent proposals of deep learning-based beamformers for ultrasound (US) imaging have attracted significant attention as computationally efficient alternatives to adaptive and compressive beamformers. Moreover, deep beamformers are versatile in that image post-processing algorithms can be readily combined. Unfortunately, with the existing technology, a large number of beamformers need to be trained and stored for different probes, organs, depth ranges, operating frequencies, and desired target 'styles', demanding significant resources such as training data and storage.
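
One generic way to avoid training a separate network per acquisition setting is to condition a single beamformer on a code describing the probe, organ, depth range, or target style. The PyTorch sketch below illustrates this conditioning idea only; it is not the architecture proposed in the paper, and all layer sizes and names are assumptions.

    import torch
    import torch.nn as nn

    class ConditionalBeamformer(nn.Module):
        """One network for many settings: RF channel data plus a setting embedding."""

        def __init__(self, n_channels=64, n_settings=8, emb_dim=16):
            super().__init__()
            self.embed = nn.Embedding(n_settings, emb_dim)   # probe/organ/style code
            self.net = nn.Sequential(
                nn.Conv2d(n_channels + emb_dim, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 3, padding=1),              # beamformed image estimate
            )

        def forward(self, rf, setting_id):
            # rf: (batch, n_channels, depth, lateral); setting_id: (batch,)
            emb = self.embed(setting_id)[:, :, None, None].expand(-1, -1, *rf.shape[2:])
            return self.net(torch.cat([rf, emb], dim=1))

    rf = torch.randn(2, 64, 256, 128)                 # toy channel data
    out = ConditionalBeamformer()(rf, torch.tensor([0, 3]))
    print(out.shape)                                  # torch.Size([2, 1, 256, 128])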


Recently, deep learning approaches have been successfully used for ultrasound (US) image artifact removal. However, paired high-quality images for supervised training are difficult to obtain in many practical situations. Inspired by the recent theory of unsupervised learning using the optimal transport-driven CycleGAN (OT-CycleGAN), here we investigate the applicability of unsupervised deep learning to US artifact removal problems without matched reference data.
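
For orientation, the following PyTorch sketch shows the generic unpaired CycleGAN training step (two generators, two discriminators, adversarial plus cycle-consistency losses) that OT-CycleGAN builds on; it is a simplified stand-in, not the OT-CycleGAN formulation of the paper, and the toy networks and the cycle weight of 10.0 are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_net():
        return nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    # G: artifact-corrupted US image -> clean image, Fgen: clean -> artifact-corrupted.
    G, Fgen = conv_net(), conv_net()
    # Discriminators score whether an image looks like a real member of each domain.
    D_clean, D_art = conv_net(), conv_net()

    opt_G = torch.optim.Adam(list(G.parameters()) + list(Fgen.parameters()), lr=1e-4)
    opt_D = torch.optim.Adam(list(D_clean.parameters()) + list(D_art.parameters()), lr=1e-4)

    def lsgan(pred, target):      # least-squares GAN loss
        return F.mse_loss(pred, torch.full_like(pred, target))

    # One training step on an *unpaired* batch: x has artifacts, y is clean.
    x, y = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)

    # Generator update: fool the discriminators and preserve content via cycle loss.
    fake_y, fake_x = G(x), Fgen(y)
    loss_G = (lsgan(D_clean(fake_y), 1.0) + lsgan(D_art(fake_x), 1.0)
              + 10.0 * (F.l1_loss(Fgen(fake_y), x) + F.l1_loss(G(fake_x), y)))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Discriminator update: real images -> 1, generated images -> 0.
    loss_D = (lsgan(D_clean(y), 1.0) + lsgan(D_clean(fake_y.detach()), 0.0)
              + lsgan(D_art(x), 1.0) + lsgan(D_art(fake_x.detach()), 0.0))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()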


In ultrasound (US) imaging, various adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of the delay-and-sum (DAS) beamformer. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here we propose a deep-learning-based beamformer that generates significantly improved images over widely varying measurement conditions and channel subsampling patterns.
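
For reference, the DAS baseline that these methods improve upon can be written compactly. The NumPy sketch below beamforms a single scanline for a linear array with dynamic receive focusing and uniform apodization; the array geometry, sampling rate, and speed of sound are illustrative values, and transmit focusing is idealized as a straight path to each depth.

    import numpy as np

    def das_scanline(rf, fs, c, elem_x, line_x, depths):
        """Delay-and-sum one image line from per-channel RF data.

        rf:      (n_channels, n_samples) received RF traces
        fs:      sampling rate [Hz], c: speed of sound [m/s]
        elem_x:  (n_channels,) lateral element positions [m]
        line_x:  lateral position of the scanline [m]
        depths:  (n_depths,) imaging depths [m]
        """
        n_ch, n_samp = rf.shape
        line = np.zeros(len(depths))
        for i, z in enumerate(depths):
            # Two-way travel time: down to (line_x, z), then back to each element.
            rx_dist = np.sqrt((elem_x - line_x) ** 2 + z ** 2)
            delays = (z + rx_dist) / c                      # seconds per channel
            idx = np.round(delays * fs).astype(int)
            valid = idx < n_samp
            # Sum the delayed samples across channels (uniform apodization).
            line[i] = rf[np.arange(n_ch)[valid], idx[valid]].sum()
        return line

    # Toy example: 64-element linear array, 40 MHz sampling, 1540 m/s.
    rf = np.random.randn(64, 4096)
    elem_x = (np.arange(64) - 31.5) * 0.3e-3
    depths = np.linspace(5e-3, 40e-3, 512)
    print(das_scanline(rf, 40e6, 1540.0, elem_x, 0.0, depths).shape)   # (512,)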


In portable, 3-D, and ultra-fast ultrasound imaging systems, there is an increasing demand for reconstructing high-quality images from a limited number of radio-frequency (RF) measurements due to receiver (Rx) or transmit (Xmit) event sub-sampling. However, because of the side-lobe artifacts introduced by RF sub-sampling, the standard beamformer often produces blurry, low-contrast images that are unsuitable for diagnostic purposes. Existing compressed sensing approaches often require either hardware changes or computationally expensive algorithms, yet their quality improvements are limited.
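
To make the Rx sub-sampling setting concrete, the NumPy sketch below zero-fills a random subset of receive channels before a naive channel sum; the retained fraction and the random pattern are arbitrary choices. Recovering the missing channel contributions from such data is exactly what compressed sensing or learned reconstruction approaches attempt.

    import numpy as np

    rng = np.random.default_rng(0)

    n_ch, n_samp = 64, 4096
    rf_full = rng.standard_normal((n_ch, n_samp))  # fully sampled channel data

    # Rx sub-sampling: keep only 1/4 of the receive channels (e.g., hardware limit).
    keep = np.sort(rng.choice(n_ch, size=n_ch // 4, replace=False))
    rf_sub = np.zeros_like(rf_full)
    rf_sub[keep] = rf_full[keep]                   # missing channels are zero-filled

    # A naive sum over the zero-filled data loses aperture, which raises side lobes
    # and lowers contrast; a CS or learned beamformer would instead estimate the
    # full-aperture sum from rf_sub (or from rf_full[keep] directly).
    naive_sum = rf_sub.sum(axis=0)
    full_sum = rf_full.sum(axis=0)
    print(np.linalg.norm(naive_sum - full_sum) / np.linalg.norm(full_sum))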
