Publications by authors named "John Sum"

The dual neural network (DNN)-based k-winner-take-all (kWTA) model is able to identify the k largest numbers from its m input numbers. When the realization contains imperfections, such as a non-ideal step function or Gaussian input noise, the model may not output the correct result. This brief analyzes the influence of these imperfections on the operational correctness of the model.
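
As a rough illustration of what the model computes, the following is a minimal numerical sketch of a single-threshold kWTA dynamic: a recurrent threshold u is nudged until exactly k inputs lie above it, and each output is the ideal step function applied to x_i - u. The function name, constants, and update rule are illustrative assumptions, not the circuit analyzed in the brief.

import numpy as np

def dnn_kwta(x, k, steps=2000, lr=0.01):
    # Illustrative sketch (not the authors' model): drive a single threshold u
    # until exactly k inputs exceed it; outputs are ideal step-function TLUs.
    u = x.mean()
    for _ in range(steps):
        y = (x > u).astype(float)      # ideal step function
        u += lr * (y.sum() - k)        # too many winners -> raise u; too few -> lower u
    return (x > u).astype(int)

x = np.array([0.9, 0.2, 0.75, 0.4, 0.85, 0.1, 0.3, 0.6])
print(dnn_kwta(x, k=3))                                      # flags the 3 largest inputs
print(dnn_kwta(x + np.random.normal(0, 0.1, x.size), k=3))   # Gaussian input noise may flip winners near the k-th boundary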


The objective of compressive sampling is to determine a sparse vector from an observation vector. This brief describes an analog neural method to achieve the objective. Unlike previous analog neural models, which either resort to the l1-norm approximation or offer only local convergence, the proposed method avoids any approximation of the l0-norm term and is provably capable of reaching the optimum solution.
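
For context, the conventional l1-relaxation route that such models typically take can be sketched with a few lines of iterative soft thresholding (ISTA); this baseline is shown only to make the contrast concrete and is not the proposed analog network. The dimensions, regularization weight, and sensing matrix below are arbitrary illustrative choices.

import numpy as np

def ista(A, y, lam=0.05, steps=500):
    # Conventional l1-approximation baseline (illustrative), not the proposed method.
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x + A.T @ (y - A @ x) / L           # gradient step on ||y - Ax||^2 / 2
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5                            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))       # random sensing matrix
x_hat = ista(A, A @ x_true)
print(np.count_nonzero(np.abs(x_hat) > 1e-3), "nonzeros recovered; true sparsity:", k)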


For decades, injecting fault/noise during gradient descent training has been used to make a neural network (NN) tolerant to persistent fault/noise, or to improve its generalization. In recent years, this technique has been re-advocated in deep learning as a way to avoid overfitting. Yet, the objective function of such fault/noise injection learning has been misinterpreted as the desired measure (i.
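
To make the setting concrete, here is a hedged sketch of the generic injection recipe, written for a plain linear model with additive weight noise; the function name, learning rate, and noise level are illustrative assumptions rather than the algorithm examined in the paper.

import numpy as np

def train_with_weight_noise(X, t, sigma=0.05, lr=0.01, epochs=200, seed=0):
    # Illustrative recipe: at every step the error (and its gradient) is evaluated
    # at a noise-perturbed copy of the weights, while the clean weights are updated.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            w_noisy = w + rng.normal(0.0, sigma, w.shape)   # injected additive weight noise
            w -= lr * (x_i @ w_noisy - t_i) * x_i           # gradient of the squared error at w_noisy
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
t = X @ np.array([1.0, -2.0, 0.5])
print(train_with_weight_noise(X, t))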


The dual neural network-based k-winner-take-all (DNN-kWTA) is an analog neural model that is used to identify the k largest numbers from n inputs. Since threshold logic units (TLUs) are key elements in the model, offset voltage drifts in TLUs may affect the operational correctness of a DNN-kWTA network. Previous studies assume that drifts in TLUs follow some particular distributions.


Over the decades, gradient descent has been applied to develop learning algorithms for training a neural network (NN). In this brief, a limitation of applying such algorithms to train an NN in the presence of persistent weight noise is revealed. Let V(w) be the performance measure of an ideal NN.


This brief presents analytical results on the effect of additive weight/bias noise on a Boltzmann machine (BM) whose unit outputs take values in {-1, 1} instead of {0, 1}. With such noise, it is found that the state distribution is still a Boltzmann distribution, but with an elevated temperature factor. Thus, the desired gradient ascent learning algorithm is derived, and the corresponding learning procedure is developed.
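
For orientation, the nominal distribution in question can be written as follows (a sketch in standard notation; the exact form of the elevated temperature factor under additive weight/bias noise is what the brief derives):

P(\mathbf{s}) = \frac{1}{Z} \exp\!\left( \frac{1}{T} \Big( \sum_{i<j} w_{ij} s_i s_j + \sum_i b_i s_i \Big) \right), \qquad s_i \in \{-1, 1\},

and with the noise present the stationary distribution keeps this Boltzmann form but with an effective temperature T' > T.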


In this paper, the effect of input noise, output node stochasticity, and recurrent state noise on the Wang kWTA is analyzed. Here, we assume that noise exists at the recurrent state y(t) and that it can be either additive or multiplicative. Besides, its dynamical change (i.


This paper studies the effects of uniform input noise and Gaussian input noise on the dual neural network-based WTA (DNN-WTA) model. We show that the state of the network, under either uniform or Gaussian input noise, converges to one of the equilibrium points. We then derive a formula to check whether the network produces correct outputs.


Fault tolerance is an interesting property of artificial neural networks. However, the existing fault models describe only a limited set of node fault situations, such as stuck-at-zero and stuck-at-one. There is no general model that can describe a large class of node fault situations.


This case report concerns a 16-year-old girl with a 9.92 Mb, heterozygous interstitial chromosome deletion at 7q33-q35, identified using array comparative genomic hybridization. The patient has dysmorphic facial features, intellectual disability, recurrent infections, self-injurious behavior, obesity, and recent onset of hemihypertrophy.


Pelizaeus-Merzbacher disease (PMD) is a neurodegenerative leukodystrophy caused by dysfunction of the proteolipid protein 1 (PLP1) gene on Xq22, which codes for an essential myelin protein. As an X-linked condition, PMD primarily affects males; however, a small number of affected females with a variety of different mutations in this gene have been reported in the medical literature. To date, no affected female has been reported with a deletion like that of our patient.


The dual neural network (DNN)-based k-winner-take-all (kWTA) model is an effective approach for finding the k largest of n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented perfectly. However, when differential bipolar pairs are used to implement the TLUs, their transfer function is a logistic function.


Recently, an analog neural network model, namely Wang's kWTA, was proposed. In this model, the output nodes are defined by the Heaviside function. Subsequently, its finite-time convergence property and the exact convergence time are analyzed.


Several recent reports of interstitial deletions at the terminal end of the short arm of chromosome 3 have helped to define the critical region whose deletion causes 3p deletion syndrome. We report on an 11-year-old girl with intellectual disability, obsessive-compulsive tendencies, hypotonia, and dysmorphic facial features in whom a 684 kb interstitial 3p25.3 deletion was characterized using array-CGH.


Injecting weight noise during training is a simple technique that has been in use for almost two decades. However, little is known about its convergence behavior. This paper studies the convergence of two weight-noise-injection-based training algorithms: multiplicative weight noise injection with weight decay, and additive weight noise injection with weight decay.
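
Written out, the two update rules being compared take roughly the following form for a generic training error E(w), step size \eta, decay constant \lambda, and zero-mean noise \mathbf{b}_t (the notation here is an illustrative reconstruction, not the paper's exact statement):

\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \big( \nabla E(\mathbf{w}_t + \mathbf{b}_t) + \lambda \mathbf{w}_t \big) \quad \text{(additive weight noise injection with weight decay)},

\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \big( \nabla E(\mathbf{w}_t \odot (\mathbf{1} + \mathbf{b}_t)) + \lambda \mathbf{w}_t \big) \quad \text{(multiplicative weight noise injection with weight decay)}.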


Fault tolerance is an interesting topic in neural networks. However, many existing results on this topic focus only on the situation of a single fault source. In fact, a trained network may be affected by multiple fault sources.


A k-winner-take-all (kWTA) network is able to identify the k largest numbers from n inputs. Recently, a dual neural network (DNN) approach was proposed to implement the kWTA process. Compared with the conventional approach, the DNN approach requires far fewer interconnections.


Improving the fault tolerance of a neural network has been studied for more than two decades. Various training algorithms have been proposed to this end. The on-line node fault injection-based algorithm is one of these algorithms, in which hidden nodes randomly output zeros during training.
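
In code, this per-pass fault injection amounts to masking the hidden layer, roughly as sketched below; the layer sizes, activation, and fault probability are illustrative assumptions, not the algorithm's exact specification.

import numpy as np

def faulty_hidden_forward(x, W_in, W_out, p_fault=0.1, rng=None):
    # Illustrative on-line node fault injection: each hidden node's output is
    # forced to zero with probability p_fault during this training pass.
    rng = rng or np.random.default_rng()
    h = np.tanh(W_in @ x)                      # hidden-layer activations
    mask = (rng.random(h.shape) >= p_fault)    # False marks a temporarily faulty node
    return W_out @ (h * mask)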


Injecting weight noise during training has been a simple strategy for improving the fault tolerance of multilayer perceptrons (MLPs) for almost two decades, and several online training algorithms have been proposed in this regard. However, there are some misconceptions about the objective functions minimized by these algorithms. Some existing results wrongly claim that the prediction error of a trained MLP affected by weight noise is equivalent to the objective function of a weight noise injection algorithm.


The weight-decay technique is an effective approach to handling overfitting and weight faults. For fault-free networks, without an appropriate value of the decay parameter, the trained network is either overfitted or underfitted. However, many existing results on the selection of the decay parameter focus on fault-free networks only.
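
For reference, the objective this entry refers to has the familiar form

J(\mathbf{w}) = \sum_i \big( t_i - f(\mathbf{x}_i, \mathbf{w}) \big)^2 + \lambda \lVert \mathbf{w} \rVert^2,

where the decay parameter \lambda trades the training error against the weight magnitudes: too small a value leaves the network overfitted, too large a value leaves it underfitted (the notation is generic; the paper's concern is choosing \lambda when faults are present).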


In the last two decades, many online fault/noise injection algorithms have been developed to obtain fault-tolerant neural networks. However, little theoretical work related to their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault.
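
As one concrete instance of the six, here is a hedged sketch of recipe 2) in its multiplicative form for an RBF network with fixed centers: the output weights are perturbed multiplicatively before every LMS-style update. The function names, widths, and constants are illustrative assumptions.

import numpy as np

def rbf_features(x, centers, width=1.0):
    # Gaussian radial basis activations for input x (centers assumed fixed).
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf_mult_weight_noise(X, t, centers, sigma=0.1, lr=0.05, epochs=100, seed=0):
    # Illustrative sketch: multiplicative weight noise is injected into the output
    # weights before each update of a single-output RBF network.
    rng = np.random.default_rng(seed)
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            phi = rbf_features(x_i, centers)
            w_noisy = w * (1.0 + rng.normal(0.0, sigma, w.shape))  # injected multiplicative noise
            w -= lr * (phi @ w_noisy - t_i) * phi
    return w

X = np.linspace(-1, 1, 50).reshape(-1, 1)
t = np.sin(3 * X[:, 0])
centers = np.linspace(-1, 1, 10).reshape(-1, 1)
print(train_rbf_mult_weight_noise(X, t, centers))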


In this paper, an objective function for training a functional link network to tolerate multiplicative weight noise is presented. The objective function is similar in form to other regularizer-based functions, consisting of a mean-squared training error term and a regularizer term. Our study shows that, under some mild conditions, the derived regularizer is essentially the same as a weight-decay regularizer.


In classical training methods for node open fault, we need to consider many potential faulty networks. When the multinode fault situation is considered, the space of potential faulty networks is very large. Hence, the objective function and the corresponding learning algorithm would be computationally complicated.
