In the construction of radial basis function (RBF) networks, two crucial issues commonly arise: the selection of the RBF centers and the effective use of the given data without overfitting. Another important issue is fault tolerance: when noise or faults exist in a trained network, its performance should not deteriorate significantly.
The concept of randomized neural networks (RNNs), such as the random vector functional link (RVFL) network and the extreme learning machine (ELM), is a widely accepted and efficient way to construct single-hidden-layer feedforward networks (SLFNs). Due to their exceptional approximation capabilities, RNNs are used extensively in various fields. While the RNN concept has shown great promise, its performance can be unpredictable under imperfect conditions, such as weight noise and outliers.
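For readers unfamiliar with the idea, a minimal ELM-style sketch in Python is shown below: the hidden-layer weights are drawn at random and never trained, and only the output weights are fitted by regularized least squares. The network size, ridge parameter, and toy data are illustrative assumptions, not values from the work summarized above.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, lam=1e-3, rng=None):
    """Train a single-hidden-layer network ELM-style:
    random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    # Ridge-regularized least squares for the output weights only.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a noisy 1-D function.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).standard_normal(200)
W, b, beta = elm_train(X, y, rng=0)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # training MSE
```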
IEEE Trans Neural Netw Learn Syst
October 2023
Among the many k-winners-take-all (kWTA) models, the dual neural network (DNN-kWTA) model requires significantly fewer connections. However, in analog realizations, noise is inevitable and affects the operational correctness of the kWTA process. Most existing results focus on the effect of additive noise.
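For concreteness, the sketch below simulates the standard DNN-kWTA dynamics: a single recurrent state y is driven until exactly k of the step-function outputs are active, and an optional additive input noise term models the imperfection discussed above. The time step, iteration count, and noise level are arbitrary illustrative choices.

```python
import numpy as np

def dnn_kwta(u, k, noise_std=0.0, dt=1e-3, steps=20000, rng=0):
    """Simulate the DNN-kWTA model: outputs o_i = step(u_i - y),
    and the single state y evolves until exactly k outputs are 1."""
    rng = np.random.default_rng(rng)
    u = np.asarray(u, dtype=float)
    if noise_std > 0:                       # additive input noise, one draw per run
        u = u + rng.normal(0.0, noise_std, size=u.shape)
    y = 0.0
    for _ in range(steps):
        o = (u - y > 0).astype(float)       # threshold logic units (ideal step)
        y += dt * (o.sum() - k)             # Euler step of  dy/dt = sum_i o_i - k
    return o

inputs = [0.31, 0.87, 0.45, 0.92, 0.10]
print(dnn_kwta(inputs, k=2))   # ideally selects 0.87 and 0.92
```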
IEEE Trans Neural Netw Learn Syst
October 2024
The dual neural network (DNN)-based k-winner-take-all (kWTA) model is able to identify the k largest numbers among its m inputs. When there are imperfections in the realization, such as a non-ideal step function or Gaussian input noise, the model may not output the correct result. This brief analyzes the influence of these imperfections on the operational correctness of the model.
IEEE Trans Neural Netw Learn Syst
February 2024
Inspired by sparse learning, the Markowitz mean-variance model with a sparse regularization term is widely used in sparse portfolio optimization. However, in penalty-based portfolio optimization algorithms, the cardinality level of the resultant portfolio depends on the choice of the regularization parameter. This brief instead formulates the mean-variance model as a cardinality (l0-norm) constrained nonconvex optimization problem, in which the number of assets in the portfolio can be specified explicitly.
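To illustrate what an explicit cardinality constraint buys, here is a generic projected-gradient sketch for the l0-constrained mean-variance problem, using a hard-thresholding (keep-top-K) projection. This is a textbook-style baseline for exposition, not the algorithm proposed in the brief; the covariance, mean returns, and step size are synthetic, and the budget constraint (weights summing to one) is omitted for brevity.

```python
import numpy as np

def topk_project(w, K):
    """Hard-thresholding projection onto {w : ||w||_0 <= K}."""
    idx = np.argsort(np.abs(w))[:-K]   # indices of all but the K largest |w_i|
    w = w.copy()
    w[idx] = 0.0
    return w

def sparse_mean_variance(mu, Sigma, K, theta=1.0, lr=0.01, iters=2000):
    """Projected gradient on  f(w) = 0.5 w'Sigma w - theta mu'w
    subject to an explicit cardinality bound ||w||_0 <= K."""
    w = np.zeros_like(mu)
    for _ in range(iters):
        grad = Sigma @ w - theta * mu
        w = topk_project(w - lr * grad, K)
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 8))
Sigma = A.T @ A / 60 + 0.1 * np.eye(8)   # synthetic covariance
mu = rng.normal(0.05, 0.02, 8)           # synthetic mean returns
w = sparse_mean_variance(mu, Sigma, K=3)
print(np.nonzero(w)[0], w.round(3))      # exactly K assets are active
```

Unlike a penalty formulation, the choice K = 3 here pins down the cardinality directly, which is the point the brief makes.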
IEEE Trans Neural Netw Learn Syst
December 2023
Sparse index tracking, one of the passive investment strategies, tracks a benchmark financial index by constructing a portfolio with a few assets from the market index. It can be viewed as parameter learning in an adaptive system, in which the selected assets and their investment percentages are updated periodically based on a sliding-window approach. However, many existing algorithms for sparse index tracking cannot explicitly and directly control the number of assets or the tracking error.
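A minimal sliding-window tracking loop might look like the following, where each window re-fits a K-asset long-only portfolio that minimizes the empirical tracking error. The top-K projected-gradient solver is a simplified stand-in for the algorithm in the paper, and the window length, rebalancing step, and return data are invented.

```python
import numpy as np

def fit_tracking_portfolio(R, r_idx, K, lr=0.01, iters=3000):
    """Minimize ||R w - r_idx||^2 subject to ||w||_0 <= K and w >= 0,
    via projected gradient (simplified; no budget constraint)."""
    w = np.zeros(R.shape[1])
    for _ in range(iters):
        grad = R.T @ (R @ w - r_idx)
        w = np.maximum(w - lr * grad / len(r_idx), 0.0)   # long-only
        keep = np.argsort(w)[-K:]                         # keep the K largest weights
        pruned = np.zeros_like(w)
        pruned[keep] = w[keep]
        w = pruned
    return w

# Sliding-window usage: re-estimate the portfolio every `step` days.
rng = np.random.default_rng(2)
R = rng.normal(0.0005, 0.01, (500, 20))            # daily asset returns
r_idx = R @ rng.dirichlet(np.ones(20))             # synthetic index returns
window, step = 250, 50
for t0 in range(0, len(R) - window, step):
    w = fit_tracking_portfolio(R[t0:t0 + window], r_idx[t0:t0 + window], K=5)
    out = slice(t0 + window, t0 + window + step)
    err = np.std(R[out] @ w - r_idx[out])          # out-of-sample tracking error
    print(f"window at day {t0}: tracking error {err:.5f}")
```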
IEEE Trans Neural Netw Learn Syst
August 2023
The objective of compressive sampling is to recover a sparse vector from an observation vector. This brief describes an analog neural method that achieves this objective. Unlike previous analog neural models, which either resort to an l1-norm approximation or offer only local convergence, the proposed method avoids any approximation of the l0-norm term and is provably capable of reaching the optimum solution.
IEEE Trans Neural Netw Learn Syst
May 2023
For decades, adding fault/noise during gradient descent training has been a technique for making a neural network (NN) tolerant to persistent fault/noise, or for improving its generalization. In recent years, this technique has been re-advocated in deep learning to avoid overfitting. Yet the objective function of such fault/noise injection learning has been misinterpreted as the desired measure (i.
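A quick Monte Carlo check makes the gap concrete: under additive weight noise, the objective that training actually sees is the average of the perturbed measure, which equals the noise-free measure plus a noise-dependent penalty. The toy model and noise level below are arbitrary; this is an illustration of the distinction, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = np.tanh(X @ rng.standard_normal(5))

def V(w):
    """Noise-free performance measure: MSE of a tanh unit."""
    return np.mean((np.tanh(X @ w) - y) ** 2)

w = rng.standard_normal(5)
sigma = 0.3
# Monte Carlo estimate of the objective actually seen under additive
# weight noise: E_n[ V(w + n) ] with n ~ N(0, sigma^2 I).
noisy = np.mean([V(w + sigma * rng.standard_normal(5)) for _ in range(20000)])
print(f"V(w)          = {V(w):.4f}")
print(f"E[V(w+noise)] = {noisy:.4f}")   # larger here: V plus a noise-dependent penalty
```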
IEEE Trans Neural Netw Learn Syst
February 2023
From the feature representation point of view, the feature learning module of a convolutional neural network (CNN) transforms an input pattern into a feature vector. This feature vector is then multiplied with a number of output weight vectors to produce softmax scores. The common training objective in CNNs is based on the softmax loss, which ignores intra-class compactness.
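One well-known way to add intra-class compactness on top of the softmax loss is a center-loss-style penalty that pulls each feature vector toward its class center. The NumPy sketch below shows the idea with made-up feature dimensions and an assumed center update rate alpha; it is not claimed to be the method proposed in this paper.

```python
import numpy as np

def center_loss_update(features, labels, centers, alpha=0.5):
    """Compute a center-loss term  0.5 * mean_i ||f_i - c_{y_i}||^2
    and move each class center toward the mean of its batch features."""
    diffs = features - centers[labels]                 # f_i - c_{y_i}
    loss = 0.5 * np.sum(diffs ** 2) / len(features)
    for c in np.unique(labels):                        # per-class center update
        m = labels == c
        centers[c] += alpha * (features[m].mean(axis=0) - centers[c])
    return loss, centers

rng = np.random.default_rng(4)
feats = rng.standard_normal((32, 64))                  # a batch of CNN feature vectors
labels = rng.integers(0, 10, 32)                       # 10 classes
centers = np.zeros((10, 64))
loss, centers = center_loss_update(feats, labels, centers)
print(f"compactness penalty: {loss:.3f}")
# In training, this term is added to the softmax loss with a small weight.
```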
IEEE Trans Neural Netw Learn Syst
July 2022
The dual neural network-based k-winner-take-all (DNN-kWTA) model is an analog neural model used to identify the k largest numbers among n inputs. Since threshold logic units (TLUs) are key elements of the model, offset voltage drifts in the TLUs may affect the operational correctness of a DNN-kWTA network. Previous studies assume that the drifts in the TLUs follow particular distributions.
IEEE Trans Vis Comput Graph
March 2022
Recent methods based on deep learning have shown promise in converting grayscale images to color ones. However, most of them allow only limited user inputs (no inputs, only global inputs, or only local inputs) to control the output color images. The main difficulty lies in how to differentiate the influences of the different inputs.
For decades, gradient descent has been applied to develop learning algorithms for training a neural network (NN). In this brief, a limitation of applying such algorithms to train an NN with persistent weight noise is revealed. Let V(w) be the performance measure of an ideal NN.
IEEE Trans Neural Netw Learn Syst
October 2019
This brief presents analytical results on the effect of additive weight/bias noise on a Boltzmann machine (BM) in which the unit outputs are in {-1, 1} instead of {0, 1}. With such noise, the state distribution is found to remain a Boltzmann distribution, but with an elevated temperature factor. The desired gradient ascent learning algorithm is then derived, and the corresponding learning procedure is developed.
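To make the setting concrete, here is a minimal Gibbs sampler for a BM whose unit outputs are in {-1, 1}, parameterized by a temperature factor T; the result summarized above says that additive weight/bias noise leaves this form of stationary distribution intact while elevating T. The network size and parameters below are arbitrary.

```python
import numpy as np

def gibbs_bm(W, b, T=1.0, steps=5000, rng=0):
    """Gibbs sampling for a Boltzmann machine with unit outputs in {-1, 1}:
    P(s_i = +1 | rest) = sigmoid(2 * h_i / T), where h_i is the local field."""
    rng = np.random.default_rng(rng)
    n = len(b)
    s = rng.choice([-1.0, 1.0], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        h = W[i] @ s - W[i, i] * s[i] + b[i]     # field from the other units
        p = 1.0 / (1.0 + np.exp(-2.0 * h / T))
        s[i] = 1.0 if rng.random() < p else -1.0
    return s

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6)) * 0.3
W = (A + A.T) / 2                                 # symmetric weights
np.fill_diagonal(W, 0.0)                          # no self-connections
b = rng.standard_normal(6) * 0.1
print(gibbs_bm(W, b, T=1.0))                      # one sample at temperature T
```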
IEEE Trans Neural Netw Learn Syst
September 2018
In this paper, the effects of input noise, output node stochasticity, and recurrent state noise on the Wang kWTA model are analyzed. Here, we assume that noise exists at the recurrent state y(t) and that it can be either additive or multiplicative. Besides, its dynamical change (i.
The original summed area table (SAT) structure is designed for handling 2D rectangular data. Due to the nature of spherical functions, the SAT structure cannot handle cube maps directly. This paper proposes a new SAT structure for cube maps and develops the corresponding lookup algorithm.
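For reference, this is the classic 2D SAT being generalized: each table entry stores the sum of all values above and to the left, so the sum over any axis-aligned rectangle needs only four lookups. A NumPy sketch:

```python
import numpy as np

def build_sat(img):
    """Summed area table: sat[i, j] = sum of img[:i+1, :j+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, r0, c0, r1, c1):
    """Sum over the inclusive rectangle [r0..r1] x [c0..c1] via 4 lookups."""
    total = sat[r1, c1]
    if r0 > 0: total -= sat[r0 - 1, c1]
    if c0 > 0: total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += sat[r0 - 1, c0 - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
sat = build_sat(img)
print(box_sum(sat, 1, 1, 2, 3), img[1:3, 1:4].sum())  # both 48.0
```

The cube-map difficulty is visible from this layout: the table is built along two fixed rectangular axes, which spherical cube-map faces do not share.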
IEEE Trans Neural Netw Learn Syst
August 2018
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). Since each TDOA measurement defines a hyperbola, computing the mobile source position is not straightforward owing to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework for solving nonlinear constrained optimization problems, for TDOA-based localization.
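As background on the measurement model (not on the LPNN solver itself): with reference sensor s_0, each TDOA range difference d_i gives a residual r_i = ||x - s_i|| - ||x - s_0|| - d_i, whose zero set is a hyperbola. The sketch below recovers the source by plain gradient descent on the squared residuals, with an invented sensor geometry.

```python
import numpy as np

def locate(sensors, d, x0, lr=0.05, iters=5000):
    """Gradient descent on the sum of squared TDOA residuals
    r_i = ||x - s_i|| - ||x - s_0|| - d_i  (sensor 0 is the reference)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        diff = x - sensors
        dists = np.linalg.norm(diff, axis=1)
        units = diff / dists[:, None]            # unit vectors from sensors to x
        r = dists[1:] - dists[0] - d
        grad = 2 * (r[:, None] * (units[1:] - units[0])).sum(axis=0)
        x -= lr * grad
    return x

sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
true_x = np.array([3.0, 7.0])
dists = np.linalg.norm(true_x - sensors, axis=1)
d = dists[1:] - dists[0]                          # noise-free TDOA ranges
print(locate(sensors, d, x0=[5.0, 5.0]))          # close to (3, 7)
```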
IEEE Trans Neural Netw Learn Syst
August 2018
In the training stage of radial basis function (RBF) networks, suitable RBF centers must be selected first. However, many existing center selection algorithms were designed for the fault-free situation. This brief develops a fault-tolerant algorithm that trains an RBF network and selects the RBF centers simultaneously.
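A standard ingredient in this line of fault-tolerant RBF work is to minimize the expected training error under multiplicative weight noise, which turns plain least squares into a specific regularized solution. The sketch below shows that weight-estimation step, with a naive random center pick standing in for the joint center-selection scheme of the brief; the data, kernel width, and noise variance are assumptions.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix H, with H[n, j] = exp(-||x_n - c_j||^2 / width)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width)

def fault_tolerant_weights(H, y, sigma2):
    """Minimize E_b ||y - H (w * (1 + b))||^2 over zero-mean multiplicative
    weight noise b with variance sigma2; the minimizer solves
    (H'H + sigma2 * diag(H'H)) w = H'y."""
    G = H.T @ H
    return np.linalg.solve(G + sigma2 * np.diag(np.diag(G)), H.T @ y)

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (100, 1))
y = np.sinc(3 * X).ravel() + 0.05 * rng.standard_normal(100)
centers = X[rng.choice(100, 15, replace=False)]   # naive center pick for the sketch
H = rbf_design(X, centers, width=0.1)
w = fault_tolerant_weights(H, y, sigma2=0.01)
print(np.mean((H @ w - y) ** 2))                  # training MSE of the noise-aware fit
```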
IEEE Trans Neural Netw Learn Syst
April 2018
This paper studies the effects of uniform input noise and Gaussian input noise on the dual neural network-based WTA (DNN-WTA) model. We show that the state of the network, under either uniform or Gaussian input noise, converges to one of the equilibrium points. We then derive a formula to check whether the network produces the correct outputs.
IEEE Trans Neural Netw Learn Syst
June 2017
Many existing results on fault-tolerant algorithms focus on the single-fault-source situation, in which a trained network is affected by only one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how open weight faults and multiplicative weight noise degrade the performance of radial basis function (RBF) networks.
IEEE Trans Neural Netw Learn Syst
October 2017
The major limitation of the Lagrange programming neural network (LPNN) approach is that the objective function and the constraints must be twice differentiable. Since sparse approximation involves nondifferentiable functions, the original LPNN approach is not suitable for recovering sparse signals. This paper proposes a new formulation of the LPNN approach based on the concept of the locally competitive algorithm (LCA).
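For context, the soft-threshold LCA dynamics that inspire the new formulation look like the following: an internal state u evolves continuously while the output a is a thresholded copy of u. This sketch follows the standard l1-oriented LCA of Rozell et al., not the formulation developed in this paper, and the dictionary and constants are invented.

```python
import numpy as np

def soft(u, lam):
    """Soft-threshold activation: a_i = sign(u_i) * max(|u_i| - lam, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.1, dt=0.05, steps=4000):
    """Locally competitive algorithm:  u' = Phi'(y - Phi a) + a - u,  a = soft(u)."""
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = soft(u, lam)
        u += dt * (Phi.T @ (y - Phi @ a) + a - u)   # Euler step of the dynamics
    return soft(u, lam)

rng = np.random.default_rng(7)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)   # random dictionary
x_true = np.zeros(100); x_true[[5, 42, 77]] = [1.0, -0.8, 0.6]
y = Phi @ x_true                                     # compressed observation
x_hat = lca(Phi, y, lam=0.05)
print(np.nonzero(np.abs(x_hat) > 1e-3)[0])           # approximate support recovered
```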
IEEE Trans Neural Netw Learn Syst
April 2016
Fault tolerance is an interesting property of artificial neural networks. However, existing fault models can describe only limited node fault situations, such as stuck-at-zero and stuck-at-one. There is no general model that can describe a large class of node fault situations.
IEEE Trans Vis Comput Graph
August 2015
Many existing precomputed radiance transfer (PRT) approaches for all-frequency lighting store the information of a 3D object in a per-vertex manner. To preserve the fidelity of high-frequency effects, the 3D object must be tessellated densely; otherwise, rendering artifacts due to interpolation may appear.
IEEE Trans Neural Netw Learn Syst
September 2015
The dual neural network (DNN)-based k-winner-take-all (kWTA) model is an effective approach for finding the k largest inputs among n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented perfectly. However, when differential bipolar pairs are used to implement the TLUs, the transfer function of the TLUs is a logistic function.
IEEE Trans Neural Netw Learn Syst
September 2013
Recently, an analog neural network model, namely Wang's kWTA, was proposed. In this model, the output nodes are defined by the Heaviside function. Subsequently, its finite-time convergence property and the exact convergence time were analyzed.
IEEE Trans Neural Netw Learn Syst
November 2012
Injecting weight noise during training is a simple technique that was proposed almost two decades ago. However, little is known about its convergence behavior. This paper studies the convergence of two weight noise injection-based training algorithms: multiplicative weight noise injection with weight decay, and additive weight noise injection with weight decay.
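The two algorithms named above can be written in a few lines. Below is a generic sketch of noise-injection training with weight decay for a linear model, where each gradient is evaluated at a noise-perturbed copy of the weights; the learning rate, decay factor, and noise level are illustrative, not the exact settings analyzed in the paper.

```python
import numpy as np

def noise_inject_train(X, y, mode="mult", sigma=0.1, decay=1e-3,
                       lr=0.01, epochs=200, rng=0):
    """Gradient descent with weight decay in which, at every step, the
    gradient is evaluated at a noise-perturbed copy of the weights."""
    rng = np.random.default_rng(rng)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            n = rng.normal(0.0, sigma, size=w.shape)
            wn = w * (1.0 + n) if mode == "mult" else w + n   # inject noise
            g = (X[i] @ wn - y[i]) * X[i]                     # gradient at noisy weights
            w -= lr * (g + decay * w)                         # weight-decay term
    return w

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + 0.05 * rng.standard_normal(200)
for mode in ("mult", "add"):
    print(mode, noise_inject_train(X, y, mode=mode).round(2))  # both near w_true
```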