Localized generalization error model and its application to architecture selection for radial basis function neural network.

IEEE Trans Neural Netw

Media and Life Science, Department of Computer Science and Technology, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China.

Published: September 2007


Article Abstract

The generalization error bounds given by current error models, based on the number of effective parameters of a classifier and the number of training samples, are usually very loose, and they are intended to hold over the entire input space. However, the support vector machine (SVM), radial basis function neural network (RBFNN), and multilayer perceptron neural network (MLPNN) are local learning machines and treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model that bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. The model is then used to develop an architecture selection technique that, for a given generalization error threshold, finds a classifier with maximal coverage of unseen samples. Experiments on 17 University of California at Irvine (UCI) data sets show that, in comparison with cross validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.
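The selection procedure described above can be sketched in code. This is an illustrative reconstruction, not the paper's exact formulation: the uniform Q-neighborhood perturbation, the Monte Carlo estimate of the stochastic sensitivity measure, the (sqrt(training error) + sqrt(sensitivity))^2 combination, and all function names and parameter defaults are assumptions made for this sketch.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF hidden-layer activations for samples X."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbfnn(X, y, n_hidden, width, rng):
    """Fit output weights by least squares, with centers drawn from the data."""
    centers = X[rng.choice(len(X), size=n_hidden, replace=False)]
    H = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, w

def localized_bound(X, y, centers, w, width, Q, n_mc, rng):
    """Surrogate localized generalization error bound (illustrative form):
    combines training MSE with a Monte Carlo stochastic sensitivity estimate,
    i.e. the mean squared output change under perturbations drawn uniformly
    from the Q-neighborhood of each training sample."""
    pred = rbf_design(X, centers, width) @ w
    r_emp = np.mean((pred - y) ** 2)  # empirical (training) MSE
    deltas = rng.uniform(-Q, Q, size=(n_mc,) + X.shape)
    sens = np.mean([(rbf_design(X + d, centers, width) @ w - pred) ** 2
                    for d in deltas])
    return (np.sqrt(r_emp) + np.sqrt(sens)) ** 2

def select_architecture(X, y, threshold, width=1.0, Q=0.2, n_mc=20,
                        max_hidden=None, seed=0):
    """Return the smallest hidden-layer size whose localized bound falls
    below the threshold; if none qualifies, fall back to the size with
    the lowest bound seen."""
    rng = np.random.default_rng(seed)
    best = None
    for m in range(1, (max_hidden or len(X)) + 1):
        centers, w = train_rbfnn(X, y, m, width, rng)
        b = localized_bound(X, y, centers, w, width, Q, n_mc, rng)
        if best is None or b < best[1]:
            best = (m, b)
        if b <= threshold:
            return m, b
    return best
```

The key design point mirrored from the abstract is that the bound is evaluated only over neighborhoods of the training samples (the perturbation radius Q), rather than over the entire input space, which is what makes the resulting bound tighter for local learning machines such as RBFNNs.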


Source
http://dx.doi.org/10.1109/tnn.2007.894058

Publication Analysis

Top Keywords

generalization error (20); neural network (12); training samples (12); localized generalization (8); error model (8); architecture selection (8); radial basis (8); basis function (8); function neural (8); unseen samples (8)

