Publications by authors named "Guang-bin Huang"

Purpose: Conventional obstructive sleep apnea (OSA) diagnosis via polysomnography can be costly and inaccessible. Recent advances in artificial intelligence (AI) have enabled the use of craniofacial photographs to diagnose OSA. This meta-analysis aims to clarify the diagnostic accuracy of this innovative approach.

Article Synopsis
  • Internal iliac artery ligation (IIAL) is a method used to manage severe pelvic fractures and has been debated for its effectiveness and safety.
  • A systematic review analyzed 171 articles, narrowing it down to 22 studies that documented IIAL's impact on hemostasis.
  • Results showed an 80% effectiveness rate in stopping bleeding, with no reported ischemic complications to pelvic organs, suggesting IIAL is a viable option for patients with unstable pelvic fractures and abdominal injuries.

Auditability and verifiability are critical elements in establishing trustworthiness in federated learning (FL). These principles promote transparency, accountability, and independent validation of FL processes. Incorporating auditability and verifiability is imperative for building trust and ensuring the robustness of FL methodologies.


Objective: Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique used to generate conduction currents in the head and disrupt brain functions. To rapidly evaluate the tDCS-induced current density in near real-time, this paper proposes a deep learning-based emulator, named DeeptDCS.

Methods: The emulator leverages an Attention U-Net, taking the volume conductor models (VCMs) of head tissues as inputs and outputting the three-dimensional current density distribution across the entire head.
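A minimal sketch of such an emulator's input-output contract, assuming PyTorch and toy tensor sizes; the published DeeptDCS network is far deeper, and the attention gating here is a simplified stand-in:

```python
# Minimal sketch of a DeeptDCS-style emulator, assuming PyTorch and toy
# tensor shapes; the real model's depth, channels, and attention gates
# are more elaborate than this illustration.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Scales skip-connection features by a learned spatial mask."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv3d(2 * channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip, gate):
        return skip * self.mask(torch.cat([skip, gate], dim=1))

class TinyAttentionUNet3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=3, ch=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv3d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(ch, ch, 2, stride=2)
        self.att = AttentionGate(ch)
        self.head = nn.Conv3d(2 * ch, out_ch, 1)  # (Jx, Jy, Jz) per voxel

    def forward(self, vcm):
        e = self.enc(vcm)          # encoder features
        d = self.up(self.down(e))  # bottleneck + upsampling
        return self.head(torch.cat([self.att(e, d), d], dim=1))

# A conductivity volume stands in for the VCM input (hypothetical shape).
vcm = torch.rand(1, 1, 32, 32, 32)
current_density = TinyAttentionUNet3D()(vcm)  # -> (1, 3, 32, 32, 32)
print(current_density.shape)
```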


The tracking-by-segmentation framework is widely used in visual tracking to handle severe appearance changes such as deformation and occlusion. Tracking-by-segmentation methods first segment the target object from the background, then use the segmentation result to estimate the target state. In existing methods, target segmentation is formulated as a superpixel labeling problem subject to a target likelihood constraint, a spatial smoothness constraint, and a temporal consistency constraint.
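As a rough illustration only: the three constraints can be read as per-superpixel scores that are combined and thresholded. Real methods solve a proper graph labeling problem (e.g., via graph cuts); every weight and score function below is hypothetical.

```python
# Illustrative sketch of combining the three constraints as per-superpixel
# scores; actual methods solve a graph labeling (e.g., MRF/graph cut), and
# all weights and score functions here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_superpixels = 50

likelihood = rng.random(n_superpixels)        # target appearance likelihood
prev_label = rng.random(n_superpixels) > 0.5  # labels from previous frame
positions = rng.random((n_superpixels, 2))    # superpixel centroids

# Spatial smoothness: agree with the mean label of nearby superpixels.
dist = np.linalg.norm(positions[:, None] - positions[None], axis=-1)
neighbors = dist < 0.2
smooth = np.array([prev_label[nb].mean() for nb in neighbors])

# Temporal consistency: agree with the previous frame's label.
temporal = prev_label.astype(float)

score = 0.5 * likelihood + 0.3 * smooth + 0.2 * temporal
target_mask = score > 0.5   # superpixels assigned to the target
print(target_mask.sum(), "of", n_superpixels, "superpixels labeled target")
```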


Rib fracture is the most common injury in chest trauma. Most patients with rib fractures are treated conservatively, but up to 50% of patients, especially those with combined injuries such as flail chest, present with chronic pain or chest wall deformities, and more than 30% have long-term disabilities that leave them unable to retain a full-time job. In the past two decades, surgery for rib fractures has achieved good outcomes.


In this paper, we propose a novel transductive pseudo-labeling method for deep semi-supervised image recognition. Inspired by the superiority of pseudo labels inferred by label propagation over those inferred by the network, we argue that the information flow from labeled to unlabeled data should be kept noiseless and with minimal loss. Previous works use the scarce labeled data for feature learning and consider only the relationship between pairs of feature vectors when constructing the similarity graph in feature space, which causes two problems that ultimately lead to noisy and incomplete information flow from labeled to unlabeled data.
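For reference, a minimal closed-form label propagation step (in the style of Zhou et al.) on a Gaussian-kernel graph built from pairwise feature distances alone, which is exactly the pairwise-only construction the paper criticizes as noisy:

```python
# Minimal label-propagation sketch for inferring pseudo labels on
# unlabeled points; all data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = -np.ones(40, dtype=int)   # -1 marks unlabeled
y[0], y[20] = 0, 1            # one labeled point per class

# Affinity from pairwise feature distances only.
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * d2.mean()))
np.fill_diagonal(W, 0)
Dinv = np.diag(1 / np.sqrt(W.sum(1)))
S = Dinv @ W @ Dinv           # symmetrically normalized graph

Y = np.zeros((40, 2))
Y[y >= 0, y[y >= 0]] = 1      # one-hot rows for labeled data
alpha = 0.99
F = np.linalg.solve(np.eye(40) - alpha * S, Y)  # closed-form propagation
pseudo = F.argmax(1)          # propagated pseudo labels
print((pseudo[:20] == 0).mean(), (pseudo[20:] == 1).mean())
```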


Semi-supervised learning has largely alleviated deep learning's demand for large amounts of annotation. However, most methods adopt the common assumption that labeled data are always available for the same classes as the unlabeled data, which is impractical and restrictive for real-world applications. In this work, we focus on semi-supervised learning when the categories of unlabeled and labeled data are disjoint.


Principal component analysis network (PCANet), an unsupervised shallow network, demonstrates noticeable effectiveness on datasets of various volumes. It consists of a two-layer convolution with PCA as the filter learning method, followed by a block-wise histogram post-processing stage. Following the structure of PCANet, extreme learning machine auto-encoder (ELM-AE) variants from the extreme learning machine network (ELMNet) and hierarchical ELMNet are employed to replace PCA's role.
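A sketch of the first-stage PCA filter learning that PCANet-style networks share, with toy patch and filter counts:

```python
# Sketch of PCANet-style first-stage filter learning: PCA over image
# patches yields convolution filters; all sizes here are toy values.
import numpy as np

rng = np.random.default_rng(2)
images = rng.random((10, 28, 28))   # stand-in dataset
k, n_filters = 5, 8                 # patch size and number of filters

# Collect all k-by-k patches and remove each patch's mean (as in PCANet).
patches = np.stack([
    img[i:i + k, j:j + k].ravel()
    for img in images
    for i in range(28 - k + 1)
    for j in range(28 - k + 1)])
patches -= patches.mean(axis=1, keepdims=True)

# Leading eigenvectors of the patch covariance become the filters.
cov = patches.T @ patches
eigvals, eigvecs = np.linalg.eigh(cov)
filters = eigvecs[:, ::-1][:, :n_filters].T.reshape(n_filters, k, k)
print(filters.shape)  # (8, 5, 5)
```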


Dictionary learning is a widely adopted approach for image classification. Existing methods focus either on finding a dictionary that produces discriminative sparse representations or on enforcing priors that best describe the dataset distribution. In many cases, however, the dataset is small, with large intra-class variability and a nondiscriminative feature space.
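As an illustration of dictionary-based classification, a hedged sketch of classifying by per-class reconstruction residual (SRC-style); the class dictionaries here are random stand-ins rather than learned, discriminative ones:

```python
# Hedged sketch of dictionary-based classification by reconstruction
# residual; dictionaries are random stand-ins, not learned.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(3)
dim, atoms_per_class = 20, 15
dicts = [rng.normal(size=(atoms_per_class, dim)) for _ in range(3)]
dicts = [D / np.linalg.norm(D, axis=1, keepdims=True) for D in dicts]

x = dicts[1][:3].sum(0)  # a sample synthesized from class 1's atoms

residuals = []
for D in dicts:
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=3)
    code = coder.transform(x[None])          # sparse code over this class
    residuals.append(np.linalg.norm(x - code @ D))
print("predicted class:", int(np.argmin(residuals)))
```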


Advanced chemometric analysis is required for the rapid and reliable determination of physical and/or chemical components in complex gas mixtures. Building on infrared (IR) spectroscopic/sensing techniques, we propose a regression model based on the extreme learning machine (ELM) algorithm for quantitative chemometric analysis. The proposed model makes two contributions to the field of advanced chemometrics.
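The core ELM recipe is standard: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights. The spectra and concentrations below are synthetic stand-ins for IR sensing data:

```python
# Minimal ELM regression sketch: random hidden layer, output weights by
# least squares. Data are synthetic stand-ins for IR spectra.
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((100, 50))            # 100 "spectra", 50 wavenumber bins
y = X[:, :5].sum(1, keepdims=True)   # synthetic target concentration

n_hidden = 64
W = rng.normal(size=(50, n_hidden))  # random input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)               # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
pred = np.tanh(X @ W + b) @ beta
print("train RMSE:", float(np.sqrt(((pred - y) ** 2).mean())))
```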


Recently, preserving the geometric information of data while learning representations has attracted increasing attention in intelligent machine fault diagnosis. Existing geometry-preserving methods require the similarities between data points to be predefined in the original data space. This predefined affinity matrix, also known as the similarity matrix, is then used to preserve geometric information during representation learning.
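A sketch of that predefined-affinity step: a Gaussian-kernel similarity matrix fixed in the input space, then used as a graph-Laplacian penalty tr(Zᵀ L Z) on learned representations Z (here a random projection, purely for illustration):

```python
# Sketch of the "predefined affinity" step: a fixed Gaussian-kernel
# similarity matrix, used as a Laplacian penalty on representations.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((30, 10))             # stand-in fault-diagnosis features

d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
A = np.exp(-d2 / d2.mean())          # predefined affinity (similarity) matrix
L = np.diag(A.sum(1)) - A            # unnormalized graph Laplacian

Z = X @ rng.normal(size=(10, 3))     # some learned representation (toy)
geometry_penalty = np.trace(Z.T @ L @ Z)  # small when similar points stay close
print(float(geometry_penalty))
```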


Convolutional dictionary learning (CDL) aims to learn a structured, shift-invariant dictionary that decomposes signals into sparse representations. While yielding superior results compared with traditional sparse coding methods on various signal and image processing tasks, most CDL methods have difficulty handling large datasets because they must process all images in the dataset in a single batch. Recent research has therefore focused on online CDL (OCDL), which updates the dictionary with sequentially incoming signals.
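A skeleton of the online regime, assuming a simple ISTA sparse-coding inner loop and plain gradient dictionary updates; step sizes and dimensions are toy choices, not the paper's algorithm:

```python
# Skeleton of online convolutional dictionary learning: each incoming
# signal is sparse-coded (a few ISTA steps), then the filters take a
# gradient step. All step sizes and sizes are toy choices.
import numpy as np

rng = np.random.default_rng(6)
N, M, K = 64, 9, 4                     # signal length, filter length, filters
D = rng.normal(size=(K, M))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def reconstruct(D, Z):
    return sum(np.convolve(Z[k], D[k], mode='full') for k in range(K))

for t in range(100):                   # sequentially incoming signals
    x = rng.normal(size=N)
    Z = np.zeros((K, N - M + 1))
    for _ in range(10):                # ISTA sparse-coding steps
        r = reconstruct(D, Z) - x
        for k in range(K):
            Z[k] -= 0.05 * np.correlate(r, D[k], mode='valid')
        Z = np.sign(Z) * np.maximum(np.abs(Z) - 0.01, 0)  # soft threshold
    r = reconstruct(D, Z) - x
    for k in range(K):                 # online dictionary update
        D[k] -= 0.01 * np.correlate(r, Z[k], mode='valid')
        D[k] /= max(np.linalg.norm(D[k]), 1e-8)
print("trained filter norms:", np.linalg.norm(D, axis=1))
```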


Purpose: To summarize and analyze the early treatment of multiple injuries combined with severe pelvic fractures, with particular focus on hemostasis methods for severe pelvic fractures, so as to improve the success rate of rescue from the fatal hemorrhagic shock caused by pelvic fractures.

Methods: A retrospective analysis was conducted on 68 cases of multiple trauma combined with severe pelvic fractures over the past 10 years (from Jan. 2006 to Dec.


In many practical transfer learning scenarios, the feature distribution differs across the source and target domains (i.e., the data are not independent and identically distributed).
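The excerpt states the problem rather than the remedy; one standard way to quantify such a distribution shift is maximum mean discrepancy (MMD) with a Gaussian kernel, sketched here on synthetic source/target features:

```python
# MMD sketch for measuring source/target distribution shift; the data
# below are synthetic stand-ins, and the kernel width is arbitrary.
import numpy as np

def mmd(X, Y, gamma=0.5):
    def k(A, B):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(7)
source = rng.normal(0.0, 1.0, (200, 8))
target = rng.normal(0.8, 1.2, (200, 8))   # shifted target domain
print("MMD(source, target):", round(mmd(source, target), 4))
print("MMD(source, source'):", round(mmd(source, rng.normal(0, 1, (200, 8))), 4))
```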


Noise that afflicts natural images, regardless of its source, generally degrades perceived image quality by introducing a high-frequency random element that, when severe, can mask image content. Except at very low levels, where it may serve a purpose, it is annoying. There are significant statistical differences between distortion-free natural images and noisy images that become evident when comparing the empirical probability distribution histograms of their discrete wavelet transform (DWT) coefficients.
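The statistical gap is easy to reproduce: the histogram of DWT detail coefficients is sharply peaked for a smooth image and spreads out once noise is added. A small sketch using PyWavelets on synthetic images:

```python
# Compare DWT detail-coefficient histograms of a smooth image and its
# noisy version; images and divergence measure are illustrative only.
import numpy as np
import pywt

rng = np.random.default_rng(8)
x = np.linspace(0, 4 * np.pi, 128)
clean = np.sin(x)[:, None] * np.cos(x)[None, :]   # smooth "natural" image
noisy = clean + rng.normal(0, 0.3, clean.shape)

def detail_hist(img):
    _, (cH, cV, cD) = pywt.dwt2(img, 'db2')
    coeffs = np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])
    hist, _ = np.histogram(coeffs, bins=51, range=(-1, 1), density=True)
    return hist + 1e-12

p, q = detail_hist(clean), detail_hist(noisy)
kl = np.sum(p * np.log(p / q)) * (2 / 51)   # discrete KL estimate
print("KL divergence between clean and noisy histograms:", round(float(kl), 3))
```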


The electronic tongue (E-Tongue), a novel taste analysis tool, shows promise for taste recognition. In this paper, we constructed a voltammetric E-Tongue system and measured 13 different kinds of liquid samples, such as tea, wine, beverages, and functional materials. Owing to system noise and varying environmental conditions, the acquired E-Tongue data exhibit patterns that are difficult to separate.


Study Objectives: Automated sleep staging has been previously limited by a combination of clinical and physiological heterogeneity. Both factors are in principle addressable with large data sets that enable robust calibration. However, the impact of sample size remains uncertain.


The polychronous neuronal group (PNG), a type of cell assembly, is one of the putative mechanisms of neural information representation. According to the reader-centric definition, some readout neurons can become selective to the information represented by polychronous neuronal groups under ongoing activity. Here, in computational models, we show that frequently activated polychronous neuronal groups can be learned by readout neurons with joint weight-delay spike-timing-dependent plasticity.
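The excerpt names a joint weight-delay rule but not its form, so the pair-based update below is an illustrative assumption only, not the paper's rule:

```python
# Generic pair-based STDP sketch with an analogous delay adjustment;
# the update equations are illustrative assumptions, not the published
# joint weight-delay rule.
import numpy as np

tau, a_plus, a_minus = 20.0, 0.05, 0.055   # ms time constant, learning rates

def joint_update(w, delay, t_pre, t_post):
    """Update one synapse from a pre/post spike-time pair (ms)."""
    dt = t_post - (t_pre + delay)          # arrival-to-postspike lag
    if dt > 0:   # spike arrived before the postsynaptic spike: potentiate
        w += a_plus * np.exp(-dt / tau)
        delay += 0.1 * np.exp(-dt / tau)   # pull arrival toward the spike
    else:        # arrived after the spike: depress, shorten the delay
        w -= a_minus * np.exp(dt / tau)
        delay -= 0.1 * np.exp(dt / tau)
    return np.clip(w, 0, 1), np.clip(delay, 0.1, 20.0)

w, d = 0.5, 5.0
for t_pre, t_post in [(0, 8), (100, 108), (200, 203)]:  # repeated pairings
    w, d = joint_update(w, d, t_pre, t_post)
print("weight:", round(float(w), 3), "delay (ms):", round(float(d), 2))
```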

Article Synopsis
  • The text includes a collection of research topics related to neural circuits, mental disorders, and computational models in neuroscience.
  • It features various studies examining the functional advantages of neural heterogeneity, propagation waves in the visual cortex, and dendritic mechanisms crucial for precise neuronal functioning.
  • The research covers a range of applications, from understanding complex brain rhythms to modeling auditory processing and investigating the effects of neural regulation on behavior.

Data often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and the auto-encoder (AE) is to reduce this noise or irrelevant information. The features of PCA (eigenvectors) and the linear AE are not able to represent data as parts (e.g.
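A quick contrast of the named methods on nonnegative synthetic data, using scikit-learn: NMF factors stay nonnegative (parts-like), whereas PCA components and random projections are signed:

```python
# Contrast the dimension-reduction methods named above on nonnegative
# synthetic data; sizes and data are toy stand-ins.
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(9)
X = np.abs(rng.normal(size=(100, 40)))   # nonnegative data for NMF

pca = PCA(n_components=5).fit(X)
nmf = NMF(n_components=5, init='nndsvda', max_iter=500).fit(X)
rp = GaussianRandomProjection(n_components=5, random_state=0).fit(X)

print("PCA components signed?", bool((pca.components_ < 0).any()))
print("NMF components signed?", bool((nmf.components_ < 0).any()))
print("RP matrix signed?", bool((rp.components_ < 0).any()))
```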


Big dimensional data is a growing trend emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. There is a growing consensus that increasing dimensionality impedes classifier performance, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on big dimensional data before building the classifiers.
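A sketch of that preprocessing step with scikit-learn: PCA ahead of a linear classifier on synthetic high-dimensional data (all sizes are arbitrary):

```python
# Dimensionality reduction as a preprocessing step before a classifier;
# dataset and sizes are synthetic toy choices.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=200, n_features=2000,
                           n_informative=20, random_state=0)
pipe = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)
print("accuracy with PCA preprocessing:", scores.mean().round(3))
```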


Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion-effect pooling. In the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should capture the relationship between the quality descriptors and perceived visual quality. However, most existing quality descriptors (e.g.
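A toy rendering of the two-stage pipeline: a gradient-similarity map as the distortion descriptor, followed by two pooling choices; both stages are illustrative, not any specific published metric:

```python
# Toy two-stage IQA sketch: local distortion map, then pooling; the
# descriptor and pooling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(0, 2 * np.pi, 64)
ref = np.sin(x)[:, None] * np.sin(x)[None, :]
dist = ref + rng.normal(0, 0.2, ref.shape)     # distorted version

def grad_mag(img):
    gy, gx = np.gradient(img)
    return np.sqrt(gx ** 2 + gy ** 2)

# Stage 1: per-pixel distortion descriptor (gradient similarity in (0, 1]).
g1, g2, c = grad_mag(ref), grad_mag(dist), 1e-3
quality_map = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)

# Stage 2: pooling; mean and worst-case percentile give different scores.
print("mean pooling:", round(float(quality_map.mean()), 3))
print("10th-percentile pooling:", round(float(np.percentile(quality_map, 10)), 3))
```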
