Publications by authors named "Chris J Harris"

Multioutput regression of nonlinear and nonstationary data is largely understudied in both the machine learning and control communities. This article develops an adaptive multioutput gradient radial basis function (MGRBF) tracker for online modeling of multioutput nonlinear and nonstationary processes. Specifically, a compact MGRBF network is first constructed with a new two-step training procedure that provides strong initial predictive capability.
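
The abstract does not give the MGRBF update equations, so the following is only a generic sketch of online multioutput RBF modeling: Gaussian RBF features with a recursive-least-squares weight update and a forgetting factor to track nonstationarity. The centres, width, and forgetting factor are assumed for illustration.

```python
import numpy as np

# Generic online multioutput RBF regression via recursive least squares
# (RLS). Illustrative only -- not the MGRBF algorithm from the paper;
# centres, width and forgetting factor are hand-picked for the sketch.

rng = np.random.default_rng(0)
centres = rng.uniform(-1, 1, size=(10, 2))   # fixed RBF centres (10 nodes, 2 inputs)
width = 0.5
n_out = 2

def phi(x):
    """Gaussian RBF feature vector for a single input x."""
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * width ** 2))

W = np.zeros((10, n_out))          # output weights, one column per output
P = np.eye(10) * 1e3               # RLS inverse-correlation matrix
lam = 0.98                         # forgetting factor -> tracks nonstationarity

for t in range(500):
    x = rng.uniform(-1, 1, 2)
    # drifting target: two outputs whose mapping changes slowly with t
    y = np.array([np.sin(3 * x[0]) + 1e-3 * t, x[0] * x[1]])
    p = phi(x)
    k = P @ p / (lam + p @ P @ p)  # RLS gain
    e = y - W.T @ p                # a-priori multioutput error
    W += np.outer(k, e)            # shared gain updates all output columns
    P = (P - np.outer(k, p @ P)) / lam
```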

The main challenge for industrial predictive models is how to deal effectively with big data from high-dimensional processes with nonstationary characteristics. Although deep networks, such as the stacked autoencoder (SAE), can learn useful features from massive data through their multilevel architectures, they are difficult to adapt online to track fast time-varying process dynamics. To integrate feature learning and online adaptation, this article proposes a deep cascade gradient radial basis function (GRBF) network for online modeling and prediction of nonlinear and nonstationary processes.
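
As a rough illustration of cascaded modeling (not the paper's GRBF architecture), the sketch below chains shallow RBF-kernel models, each fitted to the residual left by the preceding stages; scikit-learn's KernelRidge stands in for each stage, and the depth and hyperparameters are assumed.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Generic residual cascade: each stage is a shallow RBF-kernel model
# fitted to the residual of the stages before it. Illustrative of
# "cascaded" refinement only -- not the GRBF architecture from the paper.

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.3 * X[:, 0] + 0.1 * rng.standard_normal(400)

stages, residual = [], y.copy()
for depth in range(3):
    m = KernelRidge(kernel="rbf", gamma=1.0, alpha=1e-2).fit(X, residual)
    stages.append(m)
    residual = residual - m.predict(X)   # next stage models what is left

y_hat = sum(m.predict(X) for m in stages)
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```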

A key characteristic of biological systems is the ability to update memory by learning new knowledge and removing out-of-date knowledge, so that intelligent decisions can be made based on the relevant knowledge held in memory. Inspired by this fundamental biological principle, this article proposes a multi-output selective ensemble regression (SER) approach for online identification of multi-output nonlinear time-varying industrial processes. Specifically, an adaptive local learning approach is developed to automatically identify and encode a newly emerging process state by fitting a local multi-output linear model based on multi-output hypothesis testing.
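
The paper's multi-output hypothesis test is not reproduced here; the sketch below uses a fixed residual threshold as a crude stand-in for deciding when a new process state has emerged, then fits a local multi-output linear model on a recent data window. The threshold and window size are assumptions.

```python
import numpy as np

# Minimal stand-in for the idea: keep a set of local multioutput linear
# models; when even the best model's residual exceeds a threshold (a
# crude proxy for the paper's multi-output hypothesis test), fit a new
# local model on a recent window and add it to the ensemble.

rng = np.random.default_rng(2)

def fit_local(Xw, Yw):
    # multioutput least squares with a bias term
    A = np.hstack([Xw, np.ones((len(Xw), 1))])
    W, *_ = np.linalg.lstsq(A, Yw, rcond=None)
    return W

def predict(W, x):
    return np.append(x, 1.0) @ W

models, window_X, window_Y = [], [], []
threshold = 0.5          # assumed; the paper derives this from a hypothesis test

for t in range(300):
    x = rng.uniform(-1, 1, 3)
    regime = 0 if t < 150 else 1            # abrupt process-state change
    Y = x @ (np.eye(3) if regime == 0 else -np.eye(3)) + 0.05 * rng.standard_normal(3)
    window_X.append(x); window_Y.append(Y)

    errs = [np.linalg.norm(Y - predict(W, x)) for W in models]
    if not models or min(errs) > threshold:
        if len(window_X) >= 20:             # enough data for a new local model
            Xw = np.array(window_X[-20:]); Yw = np.array(window_Y[-20:])
            models.append(fit_local(Xw, Yw))

print("local models identified:", len(models))
```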

The complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing similar complexity. This paper reviews the optimality of the CV B-spline neural network approach.
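
As a real-valued simplification (the paper treats the complex-valued case), the sketch below identifies a Hammerstein system whose static nonlinearity is a B-spline expansion. Since the model is bilinear in the spline coefficients and the filter taps, alternating least squares is one common way to estimate both; the knot grid and filter length are assumed.

```python
import numpy as np
from scipy.interpolate import BSpline

# Real-valued simplification of B-spline Hammerstein identification.
# Static nonlinearity: psi(u) = sum_j c_j B_j(u); model:
# y(t) = sum_k h_k psi(u(t-k)). Bilinear in (h, c), so we alternate LS.

rng = np.random.default_rng(3)
N, n_taps, k = 2000, 3, 3
knots = np.concatenate([[-1.5] * k, np.linspace(-1.5, 1.5, 8), [1.5] * k])
n_coef = len(knots) - k - 1

u = rng.uniform(-1.4, 1.4, N)
psi_true = np.tanh(2 * u)                       # unknown static nonlinearity
h_true = np.array([1.0, 0.5, -0.2])
y = np.convolve(psi_true, h_true)[:N] + 0.01 * rng.standard_normal(N)

B = BSpline(knots, np.eye(n_coef), k)(u)        # (N, n_coef) basis matrix

h = np.zeros(n_taps); h[0] = 1.0                # init: identity filter
for _ in range(20):
    # fix h, solve for spline coefficients c (linear least squares)
    Phi = sum(h[i] * np.vstack([np.zeros((i, n_coef)), B[:N - i]])
              for i in range(n_taps))
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    # fix c, solve for filter taps h
    psi = B @ c
    U = np.column_stack([np.concatenate([np.zeros(i), psi[:N - i]])
                         for i in range(n_taps)])
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    s = np.linalg.norm(h); h, c = h / s, c * s   # fix the gain ambiguity

print("relative fit error:", np.linalg.norm(y - U @ h) / np.linalg.norm(y))
```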

Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with such mixed processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring that closely integrates linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features and to decompose the data into the PC subspace and the residual subspace (RS).
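
A minimal sketch of the serial structure, using scikit-learn's PCA and KernelPCA: linear PCs are extracted first, and KPCA is then applied only to the residual part that PCA cannot explain. The component counts and kernel width are assumed.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# Sketch of the serial PCA (SPCA) idea: linear PCA extracts the linear
# part, then KPCA models what PCA could not explain (the residual
# subspace). Component counts and the kernel width are assumed here.

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 8))
X[:, 3] = X[:, 0] ** 2 + 0.1 * rng.standard_normal(500)   # a nonlinear relation

# Step 1: linear features and the residual subspace
pca = PCA(n_components=4).fit(X)
T = pca.transform(X)                         # linear PC scores
X_res = X - pca.inverse_transform(T)         # residual-subspace part of X

# Step 2: nonlinear features from the residuals only
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(X_res)
T_nl = kpca.transform(X_res)

features = np.hstack([T, T_nl])              # combined linear + nonlinear features
```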

An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross-validation. Each RBF kernel has its own width parameter, and the basic idea is to optimize the multiple pairs of regularization parameter and kernel width, one pair per kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model-term selection based on the LOO mean square error (LOOMSE), followed by optimization of the associated kernel width and regularization parameter, also based on the LOOMSE.
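
For a linear-in-the-parameters model fitted by regularized least squares, the LOO errors have the closed form e_loo_i = e_i / (1 - h_ii), where h_ii are hat-matrix diagonals, so the LOOMSE can be evaluated without refitting. The sketch below grid-searches one (width, regularization) pair as a stand-in for the paper's within-OFR optimization; the centres and grids are assumed.

```python
import numpy as np

# LOOMSE for a regularised RBF model, evaluated in closed form via the
# hat matrix: e_loo_i = e_i / (1 - h_ii). A simple grid search over a
# (width, lambda) pair stands in for the paper's within-OFR tuning.

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.standard_normal(120)
centres = X[::10]                       # candidate kernel centres (assumed)

def loomse(width, lam):
    Phi = np.exp(-(X - centres[:, 0]) ** 2 / (2 * width ** 2))  # (120, 12)
    # ridge solution and hat matrix H = Phi (Phi'Phi + lam I)^-1 Phi'
    A = np.linalg.inv(Phi.T @ Phi + lam * np.eye(Phi.shape[1]))
    H = Phi @ A @ Phi.T
    e = y - H @ y
    e_loo = e / (1.0 - np.diag(H))
    return np.mean(e_loo ** 2)

best = min(((w, l) for w in (0.3, 0.6, 1.0, 2.0) for l in (1e-4, 1e-2, 1.0)),
           key=lambda p: loomse(*p))
print("best (width, lambda) by LOOMSE:", best)
```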

A two-stage linear-in-the-parameter model construction algorithm is proposed for noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing the model's generalization capability: at the lower level, a new elastic-net model identification algorithm using singular value decomposition is employed, and at the upper level, two regularization parameters are optimized with a particle swarm optimization algorithm by minimizing the leave-one-out (LOO) misclassification rate.
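
A loose sketch of the two-stage flavour using scikit-learn's ElasticNet: stage 1 regresses the noisy labels to produce a prefiltered target, and stage 2 fits a sparse model to that target. A small grid search by LOO misclassification stands in for the particle-swarm upper level, and the SVD-based inner solver is not reproduced.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut

# Two-stage sketch: prefilter the +/-1 labels with an elastic net, then
# fit a sparse model to the prefiltered target. Grid search over the two
# regularisation parameters replaces the paper's PSO upper level.

rng = np.random.default_rng(6)
X = rng.standard_normal((80, 5))
y = np.sign(X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(80))

def loo_error(alpha, l1_ratio):
    errs = []
    for tr, te in LeaveOneOut().split(X):
        m = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X[tr], y[tr])
        errs.append(np.sign(m.predict(X[te]))[0] != y[te][0])
    return np.mean(errs)

alpha, l1 = min(((a, r) for a in (0.01, 0.1, 0.5) for r in (0.2, 0.5, 0.8)),
                key=lambda p: loo_error(*p))

prefilter = ElasticNet(alpha=alpha, l1_ratio=l1).fit(X, y)            # stage 1
y_filtered = prefilter.predict(X)                                     # denoised target
classifier = ElasticNet(alpha=alpha, l1_ratio=l1).fit(X, y_filtered)  # stage 2, sparse
print("nonzero weights:", np.sum(classifier.coef_ != 0))
```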

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to satisfy the nonnegativity and unit-sum constraints, and this weight-updating process has the desirable side effect of further reducing the model size.
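
One common form of the multiplicative nonnegative quadratic programming (MNQP) update, for minimizing (1/2) w'Bw - v'w subject to w >= 0 and sum(w) = 1 when B has nonnegative entries (as Gaussian kernel Gram matrices do), is sketched below; at a fixed point, (Bw)_i = v_i + h on the support of w, which is the KKT condition with multiplier h for the sum constraint. The demo matrices are made up.

```python
import numpy as np

# Multiplicative update for  min_w 0.5 w'Bw - v'w  s.t. w >= 0,
# sum(w) = 1, assuming B has nonnegative entries. Weights shrinking
# towards zero can be pruned, which is how the update trims model size.

def mnqp(B, v, iters=200):
    n = len(v)
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        Bw = B @ w
        r = w / Bw                                  # elementwise ratio
        h = (1.0 - r @ v) / r.sum()                 # keeps sum(w) = 1
        w = r * (v + h)
        w = np.clip(w, 0.0, None)                   # enforce nonnegativity
        w /= w.sum()                                # guard against round-off
    return w

# tiny demo: mixing weights for 3 kernels
B = np.array([[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]])
v = np.array([0.9, 0.4, 0.6])
w = mnqp(B, v)
print(w, w.sum())                                   # nonnegative, sums to one
```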

An orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria is proposed for the construction of radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines an RBF node, namely its center vector and diagonal covariance matrix, by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean-square error, while the LOO misclassification rate is adopted in two-class classification applications.

In this brief, we propose an orthogonal forward regression (OFR) algorithm based on the principles of branch and bound (BB) and A-optimality experimental design. At each forward regression step, each candidate from a pool of candidate regressors, referred to as S, is evaluated in turn with three possible decisions: 1) one of these is selected and included in the model; 2) some of these remain in S for evaluation in the next forward regression step; and 3) the rest are permanently eliminated from S. Based on the BB principle in combination with an A-optimality composite cost function for model structure determination, a simple adaptive diagnostic test is proposed to determine the decision boundary between 2) and 3).
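
The branch-and-bound elimination rule itself is not reproduced below; the sketch only shows scoring candidate regressors with a composite of squared error plus an A-optimality term, trace((Phi'Phi)^-1), whose weighting alpha is assumed.

```python
import numpy as np

# Scoring a candidate regressor by a composite of squared error and an
# A-optimality term, trace((Phi'Phi)^-1): a small trace means well-
# conditioned, low-variance parameter estimates. The paper's BB
# elimination rule is not reproduced; alpha is assumed.

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 6))
y = X[:, 0] - 2 * X[:, 2] + 0.1 * rng.standard_normal(100)

selected = []           # indices of regressors already in the model
alpha = 0.1             # weighting of the A-optimality term (assumed)

def composite_cost(cols):
    Phi = X[:, cols]
    G = Phi.T @ Phi
    theta = np.linalg.solve(G, Phi.T @ y)
    sse = np.sum((y - Phi @ theta) ** 2)
    return sse + alpha * np.trace(np.linalg.inv(G))

for _ in range(3):      # three forward-selection steps
    pool = [j for j in range(6) if j not in selected]
    best = min(pool, key=lambda j: composite_cost(selected + [j]))
    selected.append(best)
print("selected regressors:", selected)
```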

In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in the so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient.
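
One way to build the detector symmetry f(-x) = -f(x) into an RBF classifier is to use antisymmetric basis functions phi_i(x) = k(x, c_i) - k(x, -c_i), so the symmetry holds for any trained weights. The sketch below demonstrates this on synthetic symmetric data; the centres, width, and least-squares training are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

# Antisymmetric RBF features enforce f(-x) = -f(x) by construction:
#   phi_i(x) = exp(-|x - c_i|^2 / s) - exp(-|x + c_i|^2 / s).
# Centres and width are assumed; training is plain least squares.

rng = np.random.default_rng(8)
centres = rng.standard_normal((8, 2))
s = 2.0

def features(X):
    d_pos = ((X[:, None, :] - centres) ** 2).sum(-1)
    d_neg = ((X[:, None, :] + centres) ** 2).sum(-1)
    return np.exp(-d_pos / s) - np.exp(-d_neg / s)   # (N, 8), odd in X

# symmetric two-class data: the class of -x is the negative of the class of x
X = rng.standard_normal((200, 2)) + np.array([1.0, 0.5])
X = np.vstack([X, -X])
y = np.hstack([np.ones(200), -np.ones(200)])

Phi = features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # LS-trained weights
acc = np.mean(np.sign(Phi @ w) == y)
print("training accuracy:", acc)
```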

Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint at each forward stage. The model parameter estimation at each forward stage is simply the solution of a jackknife parameter estimator for a single parameter, subject to the same positivity constraint check.

Many kernel classifier construction algorithms adopt classification accuracy as the performance metric for model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced.

Many signal processing applications pose optimization problems with multimodal and nonsmooth cost functions. Gradient methods are ineffective in these situations, so gradient-free optimization methods capable of reaching a global optimum are highly desirable for tackling these difficult problems. This paper proposes a guided global search optimization technique, referred to as the repeated weighted boosting search.
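
The published update rules are not reproduced here; the sketch below only captures the flavour of such a guided search: a small population, a cost-weighted convex combination (the "boosting" flavour), a reflected candidate, and repeated restarts. Population size, iteration counts, and the weighting scheme are all assumptions.

```python
import numpy as np

# Loose sketch in the spirit of a repeated weighted boosting search:
# weight population members by cost, form their convex combination and
# its reflection through the worst member, replace the worst when a
# candidate improves on it, and restart repeatedly. Not the published
# algorithm -- an illustration of the idea only.

def cost(x):                       # multimodal, nonsmooth test function
    return np.sum(np.abs(x)) + 2 * np.sum(np.sin(3 * x) ** 2)

rng = np.random.default_rng(9)
best_x, best_f = None, np.inf

for restart in range(10):                        # "repeated"
    P = rng.uniform(-4, 4, size=(8, 2))          # population
    for _ in range(100):
        f = np.array([cost(p) for p in P])
        w = np.exp(-(f - f.min()))               # boost weight of good members
        w /= w.sum()
        centre = w @ P                           # weighted convex combination
        mirror = P[f.argmax()] + 2 * (centre - P[f.argmax()])  # reflection
        for cand in (centre, mirror):
            if cost(cand) < f.max():             # replace the current worst
                P[f.argmax()] = cand
                f = np.array([cost(p) for p in P])
    i = f.argmin()
    if f[i] < best_f:
        best_x, best_f = P[i].copy(), f[i]

print("best found:", best_x, best_f)
```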

This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity.

This paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sum of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation during model construction. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of squared training errors.
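
For a least-squares linear-in-the-weights model, the PRESS statistic can be computed exactly from a single fit via the hat matrix H = Phi (Phi'Phi)^-1 Phi', as PRESS = sum_i (e_i / (1 - h_ii))^2. The sketch below verifies this identity against brute-force refitting on made-up data.

```python
import numpy as np

# PRESS from a single least-squares fit via the hat matrix:
#   PRESS = sum_i ( e_i / (1 - h_ii) )^2,
# i.e. the exact LOO errors without refitting n times.

rng = np.random.default_rng(10)
Phi = rng.standard_normal((50, 4))               # model's regressor matrix
y = Phi @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(50)

H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
e = y - H @ y
press = np.sum((e / (1 - np.diag(H))) ** 2)

# brute-force check: refit leaving each sample out in turn
loo = []
for i in range(50):
    mask = np.arange(50) != i
    w, *_ = np.linalg.lstsq(Phi[mask], y[mask], rcond=None)
    loo.append(y[i] - Phi[i] @ w)
assert np.isclose(press, np.sum(np.square(loo)))
print("PRESS:", press)
```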

A new robust neurofuzzy model construction algorithm is introduced for modeling a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between the fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency.
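
A minimal Takagi-Sugeno inference sketch: each rule pairs a Gaussian membership function with a local linear model, and the output is the firing-strength-weighted average of the local consequents. The rule centres, widths, and local coefficients are assumed for illustration; the paper's construction algorithm learns them from data.

```python
import numpy as np

# Minimal Takagi-Sugeno inference: Gaussian antecedents, linear
# consequents, firing-strength-weighted average. All parameters here
# are hand-picked for illustration.

rule_centres = np.array([-2.0, 0.0, 2.0])        # antecedent centres
width = 1.0
local_coef = np.array([[-1.0, -2.0],             # [slope, intercept] per rule
                       [ 2.0,  0.0],
                       [-1.0,  2.0]])

def ts_predict(x):
    mu = np.exp(-(x - rule_centres) ** 2 / (2 * width ** 2))  # firing strengths
    y_local = local_coef[:, 0] * x + local_coef[:, 1]         # rule consequents
    return (mu @ y_local) / mu.sum()                          # weighted average

xs = np.linspace(-4, 4, 9)
print([round(ts_predict(x), 3) for x in xs])
```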
