Fault tolerance is an important topic in neural networks. However, most existing results on this topic consider only a single fault source. In practice, a trained network may be affected by multiple fault sources simultaneously. This brief studies the performance of faulty radial basis function (RBF) networks that suffer concurrently from multiplicative weight noise and open weight fault. We derive a mean prediction error (MPE) formula to estimate the generalization ability of faulty networks. The MPE formula allows us to assess the generalization ability of a faulty network without using a test set or generating a large number of potential faulty networks. Based on the MPE result, we propose methods to optimize the regularization parameter, as well as the RBF width.
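To make the setting concrete, the following is a minimal sketch, not the paper's derivation: it assumes the standard fault model in which each output weight is perturbed multiplicatively, w(1 + b) with b drawn from a zero-mean Gaussian, and is set to zero (open fault) with some probability p. Instead of the closed-form MPE derived in the brief, the sketch estimates the mean prediction error empirically by averaging over random fault realizations. All function names, the toy data, and the parameter values (sigma_b, p, lam, width) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): RBF network whose output
# weights suffer multiplicative noise w*(1+b), b ~ N(0, sigma_b^2), and
# open weight faults that zero a weight with probability p.
rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: phi_ij = exp(-||x_i - c_j||^2 / width)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width)

def train_weights(Phi, y, lam):
    """Regularized least squares: w = (Phi^T Phi + lam*I)^{-1} Phi^T y."""
    M = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)

def monte_carlo_mpe(Phi_test, y_test, w, sigma_b, p, trials=2000):
    """Average test MSE over random fault realizations -- an empirical
    stand-in for the closed-form MPE studied in the brief."""
    errs = []
    for _ in range(trials):
        noise = 1.0 + sigma_b * rng.standard_normal(w.shape)  # multiplicative weight noise
        mask = rng.random(w.shape) > p                         # open fault: weight -> 0
        w_faulty = w * noise * mask
        errs.append(np.mean((Phi_test @ w_faulty - y_test) ** 2))
    return float(np.mean(errs))

# Toy 1-D regression example with hypothetical values.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers, width, lam = X[::10], 1.0, 0.1
Phi = rbf_design(X, centers, width)
w = train_weights(Phi, y, lam)
print("empirical MPE:", monte_carlo_mpe(Phi, y, w, sigma_b=0.2, p=0.05))
```

In this sketch, sweeping lam and width and re-running the Monte Carlo estimate is the brute-force counterpart of the paper's approach, which instead uses the analytical MPE to select the regularization parameter and RBF width directly.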
DOI: http://dx.doi.org/10.1109/TNNLS.2012.2196054