Publications by authors named "James Kwok"

Partial point cloud registration aims to align partial scans into a shared coordinate system. While learning-based partial point cloud registration methods have achieved remarkable progress, they often fail to take full advantage of the relative positional relationships both within (intra-) and between (inter-) point clouds. This oversight hampers their ability to accurately identify overlapping regions and search for reliable correspondences.

Diverting loop ileostomy is performed after colectomy to allow for anastomotic healing and to prevent pelvic sepsis when an anastomotic leak occurs. There is no consensus on the optimal timing of ileostomy closure, and there are limited data on complications associated with ileostomy closure more than 12 months after creation. The aim of this study is to investigate outcomes of delayed loop ileostomy closure performed more than 12 months after creation.

Bipolar disorder is a chronic psychiatric condition typically managed with mood stabilizers such as valproic acid, lithium, and atypical antipsychotics, the first of which is absorbed in the gastrointestinal tract. This case report presents the challenges encountered in managing bipolar disorder in a patient with a history of extensive gastrointestinal (GI) issues. The patient was initially treated with lithium but experienced adverse effects, prompting a switch to valproic acid (VPA) tablets.

Sample selection approaches are popular in robust learning from noisy labels. However, how to control the selection process properly so that deep networks can benefit from the memorization effect is a hard problem. In this paper, motivated by the success of automated machine learning (AutoML), we propose to control the selection process by bi-level optimization.
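The bi-level control described can be written generically as follows (a standard bi-level formulation, not necessarily this paper's exact objective; the symbols λ, w, and the two losses are illustrative):

```latex
\min_{\lambda} \; \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\lambda)\bigr)
\quad \text{s.t.} \quad
w^{*}(\lambda) = \arg\min_{w} \; \mathcal{L}_{\mathrm{train}}(w; \lambda)
```

Here λ parameterizes the sample-selection process (outer level) and w denotes the network weights obtained by training under that selection (inner level).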

Self-supervised learning enables networks to learn discriminative features from massive unlabeled data. Most state-of-the-art methods maximize the similarity between two augmentations of one image based on contrastive learning. By exploiting the consistency of the two augmentations, the burden of manual annotation can be removed.
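A minimal NumPy sketch of a contrastive objective in this spirit (an NT-Xent-style loss; the function and parameter names are illustrative, not this paper's method): each embedding is pulled toward the other augmentation of the same image and pushed away from all other samples in the batch.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss for a batch of paired embeddings.

    z1, z2: (batch, dim) embeddings of two augmentations of the same images.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2B, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    batch = z1.shape[0]
    # index of the positive: the other augmentation of the same image
    pos = np.concatenate([np.arange(batch, 2 * batch), np.arange(batch)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * batch), pos].mean()
```

The loss is low when each pair of augmentations agrees and high when the pairing is uninformative.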

Training deep neural networks (DNNs) typically requires massive computational power. Existing DNNs exhibit low time and storage efficiency due to their high degree of redundancy. In contrast to most existing DNNs, biological and social networks with vast numbers of connections are highly efficient and exhibit scale-free properties indicative of a power-law distribution, which can originate from preferential attachment in growing networks.
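Preferential attachment can be sketched in a few lines (an illustrative Barabási–Albert-style growth process, not this paper's construction; names and parameters are my own): each new node links to existing nodes with probability proportional to their current degree, so well-connected hubs attract ever more links.

```python
import numpy as np

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph where each new node links to m existing nodes chosen
    with probability proportional to current degree ("rich get richer")."""
    rng = np.random.default_rng(seed)
    edges = []
    repeated = []                 # node i appears deg(i) times in this list
    targets = list(range(m))      # the first new node links to nodes 0..m-1
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
            repeated.extend([new, t])
        # degree-proportional sampling = uniform draw from `repeated`
        targets = rng.choice(repeated, size=m).tolist()
    return edges
```

The resulting degree distribution is heavy-tailed: a few hubs accumulate many links while most nodes keep only a few.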

The feature extractor plays a critical role in text recognition (TR), but customizing its architecture is relatively underexplored because manual tweaking is expensive. In this article, inspired by the success of neural architecture search (NAS), we propose to search for suitable feature extractors. We design a domain-specific search space by exploring principles for good feature extractors.

Learning embeddings for entities and relations in knowledge graphs (KGs) has benefited many downstream tasks. In recent years, scoring functions, the crux of KG learning, have been designed by hand to measure the plausibility of triples and capture different kinds of relations in KGs. However, as relations exhibit intricate patterns that are hard to infer before training, none of them consistently performs the best on benchmark tasks.

In real-world applications, it is important for machine learning algorithms to be robust against data outliers or corruptions. In this paper, we focus on improving the robustness of a large class of learning algorithms that are formulated as low-rank semi-definite programming (SDP) problems. Traditional formulations use the square loss, which is notorious for being sensitive to outliers.
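A tiny illustration of the square loss's outlier sensitivity (not this paper's SDP formulation): the square-loss minimizer of a set of numbers is the mean, which a single corrupted value can drag arbitrarily far, while the absolute-loss minimizer (the median) barely moves.

```python
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.0, 100.0])  # one gross outlier

mean = data.mean()        # minimizes sum (x - c)^2: dragged up to ~20.8
median = np.median(data)  # minimizes sum |x - c|: stays at 1.0
```

Robust losses used in place of the square loss aim for the same insensitivity while remaining tractable inside the optimization.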

A least squares support vector machine (LS-SVM) offers performance comparable to that of SVMs for classification and regression. The main limitation of LS-SVM is that it lacks sparsity compared with SVMs, making it unsuitable for handling large-scale data due to computation and memory costs. To obtain a sparse LS-SVM, several pruning methods based on iterative strategies have recently been proposed, but they do not consider the quantity constraint on the number of reserved support vectors, which is widely used in real-life applications.
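LS-SVM training reduces to a single linear system; the sketch below uses the standard function-estimation form (function names and hyperparameters are illustrative, and the RBF kernel is one common choice):

```python
import numpy as np

def rbf_kernel(A, B, gamma_k=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, C=10.0, gamma_k=1.0):
    """Train an LS-SVM by solving one (n+1) x (n+1) linear system:
        [ 0   1^T     ] [b]   [0]
        [ 1   K + I/C ] [a] = [y]
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma_k)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_test, gamma_k=1.0):
    return rbf_kernel(X_test, X_train, gamma_k) @ alpha + b
```

Note that essentially every alpha coefficient comes out nonzero, which is exactly the lack of sparsity that pruning methods target.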

Convolutional sparse coding (CSC) can learn representative shift-invariant patterns from multiple kinds of data. However, existing CSC methods can only model noise drawn from a Gaussian distribution, which is restrictive and unrealistic. In this paper, we propose a generalized CSC model capable of dealing with complicated unknown noise.
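For context, the standard CSC objective with a Gaussian noise assumption (i.e. a square data-fidelity term) can be written as follows; the symbols are illustrative:

```latex
\min_{\{d_k\},\{z_k\}} \;
\frac{1}{2}\Bigl\| x - \sum_{k=1}^{K} d_k * z_k \Bigr\|_2^2
\;+\; \beta \sum_{k=1}^{K} \| z_k \|_1
```

Here $*$ denotes convolution, the $d_k$ are the shared shift-invariant filters, and the $z_k$ are sparse codes. The square fidelity term is what ties the model to Gaussian noise; replacing it is the route to handling other noise distributions.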

The surgical treatment of Charcot foot is a widely debated topic, with issues ranging from when to operate to how to properly correct a deformity. Historically, correction of a severe deformity was attempted in a single acute surgical procedure that frequently required open reduction and internal fixation through large incisions. This one-time procedure would often result in complications including under- or overcorrection of the deformity, neurovascular injury, or incision dehiscence leading to possible soft-tissue infection or osteomyelitis.

Many machine learning problems involve learning a low-rank positive semidefinite matrix. However, existing solvers for this low-rank semidefinite program (SDP) are often expensive. In this paper, by factorizing the target matrix as a product of two matrices and using a Courant penalty to penalize their difference, we reformulate the SDP as a biconvex optimization problem.
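The reformulation described can be sketched as follows (the symbols $f$, $U$, $V$, and $\rho$ are illustrative, not necessarily the paper's notation):

```latex
\min_{X \succeq 0} \; f(X)
\;\;\longrightarrow\;\;
\min_{U, V} \; f(UV^{\top}) + \frac{\rho}{2}\, \| U - V \|_F^2
```

The Courant penalty drives $U$ and $V$ together so that $UV^{\top}$ is symmetric positive semidefinite at the solution; when $f$ is convex, the penalized problem is biconvex, since $U \mapsto UV^{\top}$ is linear for fixed $V$ and vice versa.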

Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging.
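To make the contrast concrete: the convex nuclear norm sums all singular values, whereas a nonconvex regularizer applies a concave penalty to each (capped-$\ell_1$ is one common example; the symbols $\rho$ and $\theta$ are illustrative):

```latex
\| X \|_{*} = \sum_i \sigma_i(X)
\qquad \text{vs.} \qquad
r(X) = \sum_i \rho\bigl(\sigma_i(X)\bigr),
\quad \rho(\sigma) = \min(\sigma, \theta)
```

Because a concave $\rho$ stops penalizing singular values beyond a threshold, large (informative) singular values are shrunk less, which is the usual explanation for the better empirical performance, at the cost of a harder optimization problem.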

Convolutional sparse coding (CSC) improves sparse coding by learning a shift-invariant dictionary from the data. However, most existing CSC algorithms operate in the batch mode and are computationally expensive. In this paper, we alleviate this problem by online learning.

Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths because of frequent late detection and poor therapeutic outcomes, underscoring the need to identify effective biomarkers for early diagnosis and new therapeutic targets for effective treatment. Long noncoding RNAs (lncRNAs) have emerged as promising molecular markers for diagnosis and treatment. Through analysis of patient samples from The Cancer Genome Atlas database, we identified putative lncRNAs dysregulated in HCC and by its risk factors, hepatitis infection and alcohol consumption.

Objective: While tobacco smoking is a well-known risk factor for head and neck squamous cell carcinoma (HNSCC), the molecular mechanisms underlying tobacco-induced HNSCC remain unclear. This study sought to comprehensively identify microRNA (miRNA) alterations and evaluate their clinical relevance in smoking-induced HNSCC pathogenesis and progression.

Materials and Methods: Using small RNA-sequencing data and clinical data from 145 HNSCC patients, we performed a series of differential expression and correlation analyses to identify a panel of tobacco-dysregulated miRNAs associated with key clinical characteristics in HNSCC.

Alcohol consumption and chronic hepatitis B virus (HBV) infection are two well-established risk factors for hepatocellular carcinoma (HCC); however, there remains a limited understanding of the molecular pathways underlying the pathogenesis and progression of HCC, and of how alcohol promotes carcinogenesis in the context of HBV+ HCC. Using next-generation sequencing data from 130 HCC patients and 50 normal liver tissues, we identified a panel of microRNAs that are significantly dysregulated by alcohol consumption in HBV+ patients. In particular, two microRNAs, miR-944 and miR-223-3p, showed remarkable correlation with clinical indication and genomic alterations.

The semisupervised least squares support vector machine (LS-SVM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-SVM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved.

In online convex optimization, adaptive algorithms, which can utilize the second-order information of the loss function's (sub)gradient, have shown improvements over standard gradient methods. This paper presents a framework, Follow the Bregman Divergence Leader, that unifies various existing adaptive algorithms and reveals new insights. Under the proposed framework, two simple adaptive online algorithms with improved performance guarantees are derived.

When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance.
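One standard graph-based construction of the kind described (not necessarily these authors' exact method; function and parameter names are illustrative): fit the labeled points while penalizing the Laplacian smoothness term $f^{\top}Lf$, which forces scores to vary slowly across graph edges.

```python
import numpy as np

def label_propagation(W, y, labeled, lam=1.0):
    """Graph-based semi-supervised scoring: find f minimizing
        sum_{i labeled} (f_i - y_i)^2  +  lam * f^T L f,
    where L = D - W is the graph Laplacian of the weighted graph W.
    Unlabeled nodes get scores interpolated smoothly along the graph.
    """
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    M = np.diag(labeled.astype(float)) + lam * L
    rhs = labeled * y                       # zero rhs on unlabeled nodes
    return np.linalg.solve(M, rhs)
```

On a chain graph with opposite labels at the two ends, the unlabeled interior nodes receive smoothly interpolated scores.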

In hierarchical classification, the output labels reside on a tree- or directed acyclic graph (DAG)-structured hierarchy. On testing, the prediction paths of a given test example may be required to end at leaf nodes of the label hierarchy. This is called mandatory leaf node prediction (MLNP) and is particularly useful when the leaf nodes have much stronger semantic meaning than the internal nodes.

Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD).

The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, to ensure an accurate approximation, a sufficient number of columns have to be sampled. On very large data sets, the singular value decomposition (SVD) step on the resultant data submatrix can quickly dominate the computations and become prohibitive.
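The standard Nyström construction that this line of work builds on can be sketched in a few lines (names illustrative): sample m columns, then approximate the full matrix from the sampled columns and their m x m intersection block, so the expensive decomposition is only on the small block.

```python
import numpy as np

def nystrom_approx(K, idx):
    """Nystrom approximation of a PSD kernel matrix from sampled columns:
        K  ~=  C W^+ C^T,   C = K[:, idx],   W = K[idx][:, idx].
    Only the small m x m block W needs a pseudo-inverse / decomposition,
    instead of the full n x n matrix.
    """
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

When K is exactly low-rank and the sampled columns span its column space, the approximation is exact; in general, more sampled columns give a better approximation, which is the tension this abstract refers to.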

In online learning with kernels, it is vital to control the size (budget) of the support set because of the curse of kernelization. In this paper, we propose two simple and effective stochastic strategies for controlling the budget. Both algorithms have an expected regret that is sublinear in the horizon.
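As an illustration of a stochastic budget strategy (not necessarily this paper's exact algorithms; names and parameters are my own), here is a budgeted kernel perceptron that evicts one support vector uniformly at random whenever the budget is exceeded:

```python
import numpy as np

def budget_kernel_perceptron(X, y, budget=10, gamma=1.0, seed=0):
    """Online kernel perceptron with a stochastic budget: when the support
    set exceeds `budget`, evict one support vector uniformly at random."""
    rng = np.random.default_rng(seed)
    sv_x, sv_a = [], []                 # support vectors and coefficients
    for x, t in zip(X, y):
        if sv_x:
            k = np.exp(-gamma * ((np.array(sv_x) - x) ** 2).sum(axis=1))
            score = float(np.dot(sv_a, k))
        else:
            score = 0.0
        if t * score <= 0:              # mistake: add a support vector
            sv_x.append(x)
            sv_a.append(float(t))
            if len(sv_x) > budget:      # over budget: random eviction
                j = rng.integers(len(sv_x))
                sv_x.pop(j)
                sv_a.pop(j)
    return sv_x, sv_a
```

Random eviction keeps each prediction O(budget) regardless of stream length, which is the point of controlling the support-set size.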
