Publications by authors named "Zhiquan Qi"

Learning from label proportions (LLP) is a widespread and important learning paradigm: only the bag-level proportional information of the grouped training instances is available for the classification task, instead of the instance-level labels of the fully supervised scenario. As a result, LLP is a typical weakly supervised learning protocol, and it commonly arises in privacy-protection settings where instance-level labels are too sensitive to release in real-world applications. In general, it is less laborious and more efficient to collect label proportions as bag-level supervision than to collect instance-level labels.
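
As a rough illustration of the LLP setting described here (not the algorithm of this article), the sketch below assumes bags of unlabeled instances with known positive-class proportions and measures how far a candidate model's bag-level average predictions deviate from those proportions; all function and variable names are hypothetical.

    import numpy as np

    def bag_proportion_loss(instance_probs, bags, proportions):
        """Mean squared gap between predicted and given bag-level proportions.

        instance_probs : per-instance positive-class probabilities from a model
        bags           : list of index arrays, one per bag
        proportions    : known positive-class proportion of each bag
        """
        loss = 0.0
        for idx, p in zip(bags, proportions):
            loss += (instance_probs[idx].mean() - p) ** 2
        return loss / len(bags)

    # toy usage: six instances grouped into two bags
    probs = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.8])
    bags = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    print(bag_proportion_loss(probs, bags, [2 / 3, 1 / 3]))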

Numerous detection problems in computer vision, including road crack detection, suffer from extreme foreground-background imbalance. Fortunately, modifying the loss function offers a direct way to counteract this imbalance. In this article, we propose a pixel-based adaptive weighted cross-entropy (WCE) loss in conjunction with the Jaccard distance to facilitate high-quality pixel-level road crack detection.
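
As a hedged sketch of the general recipe named above (a foreground-weighted cross-entropy term combined with a Jaccard/soft-IoU distance), not the article's exact per-pixel adaptive weighting; the array shapes and weighting constant are assumptions:

    import numpy as np

    def wce_jaccard_loss(probs, labels, pos_weight=20.0, alpha=0.5, eps=1e-7):
        """Weighted cross-entropy plus Jaccard (soft IoU) distance.

        probs      : H x W array of predicted crack probabilities
        labels     : H x W binary array (1 = crack pixel, 0 = background)
        pos_weight : extra weight on the rare foreground (crack) class
        alpha      : trade-off between the two terms
        """
        probs = np.clip(probs, eps, 1 - eps)
        wce = -np.mean(pos_weight * labels * np.log(probs)
                       + (1 - labels) * np.log(1 - probs))
        intersection = np.sum(probs * labels)
        union = np.sum(probs) + np.sum(labels) - intersection
        jaccard_distance = 1.0 - intersection / (union + eps)
        return alpha * wce + (1 - alpha) * jaccard_distance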

Painting style transfer is an attractive and challenging computer vision problem that aims to transfer painting styles onto natural images. Existing advanced methods tackle this problem from the perspective of Neural Style Transfer (NST) or unsupervised cross-domain image translation. For both types of methods, attention has been focused on reproducing the artistic painting styles of representative artists.
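
For context on the NST perspective mentioned above (this is the classic Gram-matrix style loss of Gatys et al., not the method of this article), a minimal NumPy sketch with assumed C x H x W feature maps:

    import numpy as np

    def gram_matrix(features):
        """Channel-by-channel correlations of a C x H x W feature map."""
        c, h, w = features.shape
        flat = features.reshape(c, h * w)
        return flat @ flat.T / (c * h * w)

    def style_loss(generated_features, style_features):
        """Squared Frobenius distance between Gram matrices at one layer."""
        diff = gram_matrix(generated_features) - gram_matrix(style_features)
        return np.sum(diff ** 2)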

In recent years, deep learning-based models have achieved great success in the field of single image super-resolution (SISR), where a tremendous number of parameters is typically needed to obtain satisfying performance. However, the resulting computational complexity severely limits their application on mobile devices with limited computing and storage resources. To address this problem, in this paper we propose a flexibly adjustable, super lightweight SR network: s-LWSR.
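
The abstract gives no architectural details, so purely as an illustrative sketch of a building block commonly used in lightweight SISR (not the s-LWSR design itself): the sub-pixel (pixel-shuffle) layer upsamples by rearranging channels into spatial positions instead of using costly transposed convolutions.

    import numpy as np

    def pixel_shuffle(x, scale):
        """Rearrange a (C*scale^2, H, W) array into (C, H*scale, W*scale)."""
        c2, h, w = x.shape
        c = c2 // (scale * scale)
        x = x.reshape(c, scale, scale, h, w)      # split the channel axis
        x = x.transpose(0, 3, 1, 4, 2)            # (c, h, scale, w, scale)
        return x.reshape(c, h * scale, w * scale)

    # 2x upscaling of a single-channel 4x4 feature map: (4, 4, 4) -> (1, 8, 8)
    print(pixel_shuffle(np.random.rand(4, 4, 4), 2).shape)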

Learning from label proportions (LLP), where the training data is given in the form of bags and only the proportions of classes in each bag are available, has attracted wide interest in the machine learning community. In general, most LLP algorithms adopt random sampling to obtain the proportional information of the different categories, which incidentally yields some labeled samples in each bag. However, the LLP training process usually fails to leverage these labeled samples, even though they may carry essential information about the data distribution.

Learning from label proportions (LLP), in which the training data is given in the form of bags and only the proportion of each class in each bag is available, has attracted wide interest in machine learning. However, solving high-dimensional LLP problems remains a challenging task. In this paper, we propose a novel algorithm, learning from label proportions based on random forests (LLP-RF), which is well suited to high-dimensional LLP problems.

How to solve the classification problem given only label proportions has recently drawn increasing attention in the machine learning field. In this paper, we propose an ensemble learning strategy for the learning problem with label proportions (LLP). In detail, we first give a loss function based on different weights for LLP, then construct the corresponding weak classifier and, at the same time, estimate its conditional probabilities with a standard logistic function.
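
As a hedged sketch of the ingredients named above (a logistic model providing conditional probabilities, scored against bag proportions; the per-bag weights are shown purely for illustration, and the article's actual weighting scheme and ensemble construction are not reproduced):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def weighted_bag_loss(w, X, bags, proportions, bag_weights):
        """Weighted squared deviation of predicted from given bag proportions."""
        probs = sigmoid(X @ w)                    # instance-level P(y = 1 | x)
        loss = 0.0
        for idx, p, omega in zip(bags, proportions, bag_weights):
            loss += omega * (probs[idx].mean() - p) ** 2
        return loss

    # toy usage: a hypothetical 2-feature dataset split into two bags
    X = np.array([[1.0, 0.2], [0.3, 1.1], [0.9, 0.5], [0.1, 0.8]])
    bags = [np.array([0, 1]), np.array([2, 3])]
    print(weighted_bag_loss(np.zeros(2), X, bags, [0.5, 1.0], [1.0, 2.0]))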

Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using a cascade AdaBoost classifier and an adaptive Kalman filter (AKF) with target identity awareness. A cascade AdaBoost classifier using Haar-like features is built for vehicle detection, followed by a more comprehensive verification process that refines the vehicle hypotheses in terms of both location and dimension.
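
A minimal sketch in the same spirit, using OpenCV: a pre-trained Haar cascade for detection plus a constant-velocity Kalman filter for one track. The cascade file name is hypothetical, and the article's verification step and target identity management are not reproduced here.

    import cv2
    import numpy as np

    # hypothetical pre-trained Haar cascade for vehicles
    detector = cv2.CascadeClassifier("vehicle_cascade.xml")

    # constant-velocity Kalman filter: state = (x, y, vx, vy), measurement = (x, y)
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    def track_frame(frame):
        """Detect vehicles in a BGR frame and update one Kalman track."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        prediction = kf.predict()                 # predicted (x, y, vx, vy)
        if len(boxes) > 0:
            x, y, w, h = boxes[0]
            center = np.array([[x + w / 2], [y + h / 2]], dtype=np.float32)
            kf.correct(center)                    # update with the measurement
        return prediction, boxes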

Clustering has been widely used in data analysis. A majority of existing clustering approaches assume that the number of clusters is given in advance. Recently, a novel clustering framework has been proposed that can automatically learn the number of clusters from the training data.

Recently, learning from label proportions (LLP), which seeks generalized instance-level predictors based merely on bag-level label proportions, has attracted widespread interest. However, due to its weak label scenario, LLP is usually cast in a transductive learning framework, which leads to an intractable combinatorial optimization problem. In this paper, we propose a new algorithm, called LLP via nonparallel support vector machine (LLP-NPSVM), to alleviate this dilemma.
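
To make the combinatorial difficulty concrete, a generic transductive LLP formulation (a textbook-style statement, not the exact objective of LLP-NPSVM) jointly searches over a classifier f and the unknown instance labels y_i under the bag-proportion constraints:

    \min_{f,\ y \in \{-1,+1\}^N} \ \Omega(f) + C \sum_{i=1}^{N} \ell\bigl(y_i, f(x_i)\bigr)
    \quad \text{s.t.} \quad \frac{1}{|B_k|} \sum_{i \in B_k} \frac{1 + y_i}{2} = p_k, \qquad k = 1, \dots, K,

where B_k is the k-th bag and p_k its given positive-class proportion; the discrete search over the labels y is what makes the problem combinatorial.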

The semisupervised learning (SSL) problem, which makes use of both a large amount of cheap unlabeled data and a few labeled data for training, has attracted a great deal of attention in machine learning and data mining over the last few years. Exploiting manifold regularization (MR), Belkin et al. proposed a new semisupervised classification algorithm, Laplacian support vector machines (LapSVMs), which has shown state-of-the-art performance in the SSL field.
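
For reference, the manifold regularization framework of Belkin et al. that underlies LapSVM augments the usual regularized hinge loss with a graph Laplacian term computed from both labeled and unlabeled points:

    \min_{f \in \mathcal{H}_K} \ \frac{1}{l} \sum_{i=1}^{l} \max\bigl(0,\, 1 - y_i f(x_i)\bigr)
    + \gamma_A \|f\|_K^2 + \frac{\gamma_I}{(l+u)^2}\, \mathbf{f}^{\top} L\, \mathbf{f},

where l and u are the numbers of labeled and unlabeled examples, L is the graph Laplacian over all l + u points, \mathbf{f} = (f(x_1), \dots, f(x_{l+u}))^{\top}, and \gamma_A, \gamma_I trade off ambient and intrinsic (manifold) regularization.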

We propose a novel nonparallel classifier, called the nonparallel support vector machine (NPSVM), for binary classification. Our NPSVM is fundamentally different from existing nonparallel classifiers, such as the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM), and has several notable advantages: 1) its two primal problems implement the structural risk minimization principle; 2) the dual problems of these two primal problems enjoy the same advantages as those of standard SVMs, so the kernel trick can be applied directly, whereas existing TWSVMs have to construct another two primal problems for nonlinear cases based on approximate kernel-generated surfaces, and their nonlinear problems do not degenerate to the linear case even when the linear kernel is used; 3) the dual problems have the same elegant formulation as those of standard SVMs and can be solved efficiently by the sequential minimal optimization (SMO) algorithm, whereas existing GEPSVM and TWSVMs are not suitable for large-scale problems; 4) NPSVM has the inherent sparseness of standard SVMs; and 5) existing TWSVMs are special cases of NPSVM when its parameters are appropriately chosen. Experimental results on a large number of datasets show the effectiveness of our method in terms of both sparseness and classification accuracy, further confirming the conclusions above.
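
As a hedged sketch of how such a primal can look (one commonly stated form of the class-(+) problem; the class-(-) problem is symmetric, and the paper should be consulted for the exact formulation): an epsilon-insensitive loss keeps positive instances near their own hyperplane while a hinge loss pushes negative instances away, which is what yields standard SVM-type duals and sparseness.

    \min_{w_+,\, b_+,\, \eta,\, \eta^*,\, \xi}\ \
    \frac{1}{2}\|w_+\|^2 + C_1 \sum_{i \in \mathcal{I}_+} (\eta_i + \eta_i^*) + C_2 \sum_{j \in \mathcal{I}_-} \xi_j
    \quad \text{s.t.} \quad
    -\varepsilon - \eta_i^* \le w_+^{\top} x_i + b_+ \le \varepsilon + \eta_i,\ i \in \mathcal{I}_+, \qquad
    -(w_+^{\top} x_j + b_+) \ge 1 - \xi_j,\ j \in \mathcal{I}_-, \qquad
    \eta_i,\ \eta_i^*,\ \xi_j \ge 0 .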

The Universum, defined as samples that belong to neither class of the classification problem of interest, has been proven helpful in supervised learning. In this work, we design a new Twin Support Vector Machine with Universum (called U-TSVM), which can utilize Universum data to improve the classification performance of TSVM. Unlike U-SVM, in U-TSVM the Universum data are located in a nonparallel insensitive loss tube by using two hinge loss functions, which exploits the prior knowledge embedded in the Universum data more flexibly.

Semi-supervised learning has attracted a great deal of attention in machine learning and data mining. In this paper, we propose a novel Laplacian Twin Support Vector Machine (called Lap-TSVM) for the semi-supervised classification problem, which exploits the geometric information of the marginal distribution embedded in the unlabeled data to construct a more reasonable classifier, and which is a useful extension of TSVM. Furthermore, by choosing appropriate parameters, Lap-TSVM degenerates to either TSVM or TBSVM.
