Publications by authors named "Fanhua Shang"

Robust face clustering has a wide range of applications in gate passes, surveillance systems, and security analysis in embedded sensors. Nevertheless, existing algorithms have limitations in finding accurate clusters when the data contain noise (e.g.

Many research works have shown that traditional alternating direction methods of multipliers (ADMMs) can be better understood through continuous-time differential equations (DEs). On the other hand, many unfolded algorithms directly inherit the traditional iterations to build deep networks. Although they achieve superior practical performance and a faster convergence rate than their traditional counterparts, there is a lack of clear insight into unfolded network structures.
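
Since the abstract describes ADMM only at a high level, a minimal sketch of the classic ADMM iteration may help; it solves the standard lasso problem, not the unfolded networks studied in the article, and the names `soft_threshold` and `admm_lasso` are our own illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
    """Classic ADMM for min_x 0.5||Ax - b||^2 + lam*||x||_1.

    Splitting: min_{x,z} 0.5||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
    """
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u is the scaled dual
    AtA_rhoI = A.T @ A + rho * np.eye(n)               # cached for the x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # x-update
        z = soft_threshold(x + u, lam / rho)                # z-update
        u = u + x - z                                       # dual ascent
    return z
```

Unfolded methods build a deep network by treating a fixed number of such iterations as layers and learning the step parameters, which is the structure the article analyzes.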

Deep neural networks (DNNs) play key roles in various artificial intelligence applications such as image classification and object recognition. However, a growing number of studies have shown that adversarial examples exist for DNNs: inputs that are almost imperceptibly different from the original samples but can greatly change a DNN's output. Recently, many white-box attack algorithms have been proposed, and most of them concentrate on how to make the best use of gradients per iteration to improve adversarial performance.
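
As a concrete illustration of "making use of gradients per iteration," here is a hedged sketch of iterative FGSM, one of the basic white-box attacks; the article's algorithms differ, and `ifgsm_attack` is a hypothetical helper assuming a PyTorch classifier with inputs in [0, 1].

```python
import torch

def ifgsm_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative FGSM: at each step, ascend the loss along the sign of its
    gradient, then project back into the eps-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # gradient-sign step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project to eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)              # keep valid pixels
    return x_adv
```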

Panoramic videos are shot by an omnidirectional camera or a collection of cameras and can display a view in every direction, giving viewers an immersive experience. The study of super-resolution for panoramic videos has attracted much attention, and many methods have been proposed, especially deep learning-based ones.

Model quantization can reduce model size and computational latency, and it has been successfully applied in many applications on mobile phones, embedded devices, and smart chips. Mixed-precision quantization assigns a different bit precision to each layer according to its sensitivity, which can achieve great performance. However, it is difficult to quickly determine the quantization bit precision of each layer of a deep neural network under constraints such as hardware resources, energy consumption, model size, and computational latency.
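
To make the bit-precision idea concrete, here is a minimal sketch of symmetric uniform ("fake") quantization with a per-layer bit width; the layer names and bit assignment are hypothetical, and the article's method for choosing them is not reproduced here.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                   # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax + 1e-12       # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                             # dequantized ("fake-quant") weights

# Mixed precision: each layer gets its own bit width, e.g. chosen by its
# sensitivity to quantization error (hypothetical assignment for illustration).
layer_bits = {"conv1": 8, "conv2": 4, "fc": 2}
```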

Recently, stochastic hard thresholding (HT) optimization methods [e.g., stochastic variance reduced gradient hard thresholding (SVRGHT)] are becoming more attractive for solving large-scale sparsity/rank-constrained problems.
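
For reference, the hard thresholding operator at the core of such methods projects onto the set of k-sparse vectors. A minimal sketch, assuming dense NumPy vectors:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest.

    This is the projection onto {x : ||x||_0 <= k} used by hard-thresholding
    methods such as SVRGHT.
    """
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out
```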

In recent years, a series of matching pursuit and hard thresholding algorithms have been proposed to solve the sparse representation problem with an ℓ0-norm constraint. In addition, some stochastic hard thresholding methods have also been proposed, such as stochastic gradient hard thresholding (SG-HT) and stochastic variance reduced gradient hard thresholding (SVRGHT). However, each iteration of all these algorithms requires a hard thresholding operation, which leads to high per-iteration complexity and slow convergence, especially for high-dimensional problems.
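
To show where that per-iteration cost arises, here is a hedged template of an SVRG-style hard thresholding loop in the spirit of SVRGHT; `grad_i`, the step size, and the epoch lengths are placeholders, not the article's exact algorithm.

```python
import numpy as np

def svrg_ht(grad_i, n, x0, k, eta=0.01, epochs=10, m=100):
    """SVRG with hard thresholding (SVRGHT-style template).

    grad_i(x, i) returns the gradient of the i-th component function.
    Note that the hard-thresholding projection runs once per inner
    iteration, which is the per-iteration cost the text refers to.
    """
    def ht(x):  # keep the k largest-magnitude entries
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]
        out[idx] = x[idx]
        return out

    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()
        mu = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)  # full gradient
        for _ in range(m):
            i = np.random.randint(n)
            v = grad_i(x, i) - grad_i(x_snap, i) + mu  # variance-reduced gradient
            x = ht(x - eta * v)                        # HT projection every step
    return x
```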

In recent years, many studies have shown that generalized iterated shrinkage thresholding (GIST) algorithms have become commonly used first-order optimization methods for sparse learning problems. Nonconvex relaxations of the ℓ0-norm usually achieve better performance than the convex case (e.g.
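
The shared template behind such methods is a gradient step followed by a shrinkage-thresholding step. A minimal ISTA sketch with the convex ℓ1 shrinkage is below; GIST-style methods keep the same loop but swap in the thresholding map of a nonconvex penalty.

```python
import numpy as np

def soft_threshold(v, t):
    """Shrinkage operator for the convex l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5||Ax - b||^2 + lam*||x||_1.

    GIST-style methods follow the same gradient-then-shrink template but
    replace soft_threshold with the (generally stronger) thresholding map
    of a nonconvex penalty.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient step ...
        x = soft_threshold(x - g / L, lam / L)  # ... then shrinkage
    return x
```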

Recently, many stochastic variance reduced alternating direction methods of multipliers (ADMMs) (e.g., SAG-ADMM and SVRG-ADMM) have made exciting progress, such as achieving a linear convergence rate for strongly convex (SC) problems.

Semi-supervised non-negative matrix factorization (NMF) exploits the strength of NMF in effectively learning the local information contained in data, and it can achieve effective learning even when only a small fraction of the data is labeled. NMF is particularly useful for dimensionality reduction of high-dimensional data. However, the mapping between the low-dimensional representation learned by semi-supervised NMF and the original high-dimensional data contains complex hierarchical and structural information, which is hard to extract using only single-layer clustering methods.
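
For context, the core of NMF is a pair of multiplicative updates that keep both factors non-negative. A minimal sketch of the classic Lee-Seung updates follows; the semi-supervised and multi-layer extensions discussed above add label-constraint terms that are not shown here.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10):
    """Lee-Seung multiplicative updates for X ≈ W @ H (Frobenius loss)."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H; stays non-negative
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W; stays non-negative
    return W, H
```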

In this article, a new deep neural network based on sparse filtering and manifold regularization (DSMR) is proposed for feature extraction and classification of polarimetric synthetic aperture radar (PolSAR) data. DSMR uses a novel deep neural network (DNN) to automatically learn features from raw SAR data. During preprocessing, the spatial information between pixels on PolSAR images is exploited to weight each data sample.

Recently, nuclear norm-based low-rank representation (LRR) methods have become popular in several applications, such as subspace segmentation. However, they have two limitations: first, the nuclear norm, as a relaxation of the rank function, can lead to suboptimal solutions, because nuclear norm minimization over-relaxes the singular values and treats each of them equally; second, solving LRR problems can be time-consuming, since they involve a singular value decomposition of a large matrix at each iteration. To overcome both disadvantages, this paper mainly considers two tractable variants of LRR: one is Schatten-p norm minimization-based LRR (i.
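
To make the first variant concrete: the Schatten-p (quasi-)norm interpolates between the nuclear norm (p = 1) and the rank (p → 0). A minimal sketch of its computation from the singular values:

```python
import numpy as np

def schatten_p_norm(X, p):
    """Schatten-p (quasi-)norm: (sum_i sigma_i(X)^p)^(1/p).

    For p = 1 this is the nuclear norm; for 0 < p < 1 it is a tighter,
    nonconvex surrogate of the rank, which motivates the variant above.
    """
    sigma = np.linalg.svd(X, compute_uv=False)
    return (sigma ** p).sum() ** (1.0 / p)
```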

The heavy-tailed distributions of corrupted outliers and of the singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo, and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging nonconvex, nonsmooth, and non-Lipschitz problems, and it makes existing algorithms very slow for large-scale applications.
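
For reference, a hyper-Laplacian prior and the nonconvex penalty it induces can be written as follows; the exponent p and scale λ are generic symbols, not values from the article.

```latex
% Hyper-Laplacian prior on a scalar x (e.g., an outlier entry or a singular
% value); 0 < p < 1 gives the heavy tail mentioned above. Taking the negative
% log-likelihood yields the nonconvex, non-Lipschitz \ell_p-style penalty
% that makes the resulting optimization problems hard:
\Pr(x) \;\propto\; \exp\bigl(-\lambda \lvert x \rvert^{p}\bigr),
\qquad
-\log \Pr(x) \;=\; \lambda \lvert x \rvert^{p} + \text{const},
\quad 0 < p < 1.
```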

Low-rank tensor completion (LRTC) has been successfully applied to a wide range of real-world problems. Despite these broad, successful applications, existing LRTC methods can become very slow or even inapplicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, with much lower computational complexity.

In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost.
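
The large matrices in question are the mode-n unfoldings of the tensor, one per mode. A minimal sketch of the unfolding (one common convention) and the resulting matrix sizes:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rearrange tensor T into a matrix whose rows are
    indexed by dimension `mode`. Trace-norm LRTC methods apply an SVD to
    each such unfolding, which is the per-iteration cost described above."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Example: a 3-way tensor has three unfoldings, hence multiple large SVDs.
T = np.random.rand(30, 40, 50)
print([unfold(T, n).shape for n in range(3)])  # [(30, 2000), (40, 1500), (50, 1200)]
```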

In recent years, matrix rank minimization problems have attracted considerable interest from the machine learning, data mining, and computer vision communities. All of these problems can be solved via convex relaxations that minimize the trace norm instead of the rank of the matrix; the relaxations have to be solved iteratively and involve a singular value decomposition (SVD) at each iteration. Therefore, algorithms for trace norm minimization problems suffer from the high computational cost of multiple SVDs.
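
The SVD arises because the proximal operator of the trace norm is singular value thresholding (SVT). A minimal NumPy sketch:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau*||X||_*.

    Each application needs a full SVD, which is why iterative trace-norm
    solvers are expensive on large matrices."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # soft-threshold the singular values
    return (U * s) @ Vt                 # equals U @ diag(s) @ Vt
```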
