Fault tolerance is an important topic in neural networks. However, many existing results on this topic consider only a single fault source, whereas a trained network may in practice be affected by multiple fault sources.
IEEE Trans Neural Netw Learn Syst, April 2012
A k-winner-take-all (kWTA) network identifies the k largest numbers among n inputs. Recently, a dual neural network (DNN) approach was proposed to implement the kWTA process. Compared with the conventional approach, the DNN approach requires far fewer interconnections.
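For reference, the kWTA operation itself can be stated in a few lines. The minimal NumPy sketch below is only illustrative: the function name and the sort-based formulation are assumptions, not the paper's DNN formulation, which realizes the same selection with a recurrent network driven by a single dual variable.

    import numpy as np

    def kwta(u, k):
        # Return a 0/1 vector marking the k largest entries of u.
        # Illustrative sketch only, not the dual-neural-network method.
        u = np.asarray(u, dtype=float)
        order = np.argsort(u)[::-1]   # indices in descending order of input value
        x = np.zeros_like(u)
        x[order[:k]] = 1.0            # winners output 1, all other nodes output 0
        return x

    print(kwta([0.3, 1.2, -0.5, 0.9, 0.1], k=2))  # -> [0. 1. 0. 1. 0.]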
IEEE Trans Neural Netw Learn Syst, February 2012
Improving the fault tolerance of a neural network has been studied for more than two decades, and various training algorithms have been proposed over the years. The on-line node fault injection-based algorithm is one of them: during training, hidden nodes randomly output zeros.
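A minimal sketch of the fault-injection step, assuming a NumPy setting, is given below. The function name and the fault_rate parameter are illustrative, and only the forward pass is shown, not the full on-line weight update analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def inject_node_faults(h, fault_rate):
        # Randomly force hidden-node outputs to zero for one training pattern.
        # h: hidden-node outputs; fault_rate: probability a node is faulty.
        mask = rng.random(h.shape) >= fault_rate   # True = healthy, False = faulty
        return h * mask                            # faulty nodes output zero

    h = np.array([0.8, -0.2, 0.5, 1.1])
    print(inject_node_faults(h, fault_rate=0.25))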
In this paper, an objective function for training a functional link network to tolerate multiplicative weight noise is presented. The objective function is similar in form to other regularizer-based functions that consist of a mean square training error term and a regularizer term. Our study shows that under some mild conditions the derived regularizer is essentially the same as a weight decay regularizer.
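The generic form of such a regularizer-based objective can be sketched as follows. This is a NumPy sketch with illustrative names; the paper derives the actual regularizer from the multiplicative weight-noise statistics, whereas lam here is treated as a free constant.

    import numpy as np

    def regularized_objective(w, phi, y, lam):
        # Mean square training error plus a weight-decay-style regularizer.
        # w: output weights, phi: expanded (functional-link) inputs per pattern,
        # y: target outputs, lam: regularization constant.
        err = y - phi @ w
        mse = np.mean(err ** 2)       # mean square training error term
        reg = lam * np.sum(w ** 2)    # weight-decay regularizer term
        return mse + reg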
In classical training methods for node open fault, we need to consider many potential faulty networks. When the multinode fault situation is considered, the space of potential faulty networks is very large. Hence, the objective function and the corresponding learning algorithm would be computationally complicated.
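To see why this space is large: with each hidden node either healthy or open (output stuck at zero), the number of fault patterns grows combinatorially with the number of nodes. A small illustrative count follows; the function name and the particular numbers are assumptions, not taken from the paper.

    from math import comb

    def count_faulty_networks(num_hidden, max_faults):
        # Number of distinct faulty networks when up to max_faults hidden
        # nodes may be open (i.e., output stuck at zero).
        return sum(comb(num_hidden, f) for f in range(1, max_faults + 1))

    # e.g., 50 hidden nodes with up to 3 simultaneous open faults
    print(count_faulty_networks(50, 3))   # 50 + 1225 + 19600 = 20875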