We consider a class of kernel-based regression problems with general convex loss functions in a regularization scheme. The kernels used in the scheme are not necessarily symmetric and hence not positive semidefinite; the l1-norm of the coefficients in the kernel ensemble is taken as the regularizer. Our setting in this letter differs substantially from that of classical regularized regression algorithms such as regularization networks and support vector machine regression. Under an established error decomposition consisting of approximation error, hypothesis error, and sample error, we present a detailed mathematical analysis of this scheme and, in particular, of its learning rate. A reweighted empirical process theory is applied to the analysis of the resulting learning algorithms and plays a key role in deriving an explicit learning rate under suitable assumptions.
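To make the scheme concrete: given samples (x_1, y_1), ..., (x_m, y_m), the estimator takes the form f(x) = sum_j alpha_j K(x, x_j), and the coefficients solve min_alpha (1/m) sum_i V(y_i, f(x_i)) + lambda * ||alpha||_1, where the kernel K need not be symmetric. The following is a minimal sketch, assuming squared loss for the convex loss V, a shifted (hence asymmetric) Gaussian kernel, and a proximal-gradient (ISTA) solver; these specific choices are illustrative assumptions, not taken from the letter, which treats general convex losses.

    import numpy as np

    def kernel(x, t, sigma=0.5, shift=0.3):
        # Shifted Gaussian (illustrative assumption): K(x, t) != K(t, x),
        # so the Gram matrix is neither symmetric nor positive semidefinite.
        return np.exp(-((x - t - shift) ** 2) / sigma ** 2)

    def fit_l1_kernel(x, y, lam=0.1, n_iter=500):
        m = len(x)
        K = kernel(x[:, None], x[None, :])  # m x m Gram matrix, K[i, j] = K(x_i, x_j)
        # Step size 1/L, where L is the Lipschitz constant of the gradient
        # of the smooth part (1/m) * ||K a - y||^2.
        step = 1.0 / ((2.0 / m) * np.linalg.norm(K, 2) ** 2)
        alpha = np.zeros(m)
        for _ in range(n_iter):
            grad = (2.0 / m) * K.T @ (K @ alpha - y)  # gradient of average squared loss
            z = alpha - step * grad                   # gradient step on the smooth part
            # Soft thresholding: proximal operator of lam * ||.||_1.
            alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        return alpha, K

    # Usage on synthetic data: the learned f(x) = sum_j alpha_j K(x, x_j).
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 50)
    y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(50)
    alpha, K = fit_l1_kernel(x, y)
    print("nonzero coefficients:", np.count_nonzero(alpha))
    print("training RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))

The l1 penalty drives most coefficients to zero, so the fitted function is a sparse kernel ensemble, which is the effect the regularizer in the scheme is designed to produce.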


Source
http://dx.doi.org/10.1162/NECO_a_00535
