On the optimality of neural-network approximation using incremental algorithms.

IEEE Transactions on Neural Networks

Department of Electrical Engineering, Technion, Haifa 32000, Israel.

Published: June 2010

The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L2 norm, we compute upper bounds on the approximation error, where the error is measured by the Lq norm, 1 ≤ q ≤ ∞. These results extend previous work, applicable in the case q = 2, and provide an explicit algorithm that achieves the derived approximation error rate. In the range q ≤ 2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q > 2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L2 norm to Lp are also discussed. A further interesting conclusion from our results is that no loss of generality is incurred by using networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are derived, which suffice to guarantee the stated convergence rates.
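The abstract concerns incremental construction of a single-hidden-layer network: units are added one at a time, each fitted to the current residual, with non-negative hidden-to-output weights. The sketch below is a generic, assumed illustration of that idea in Python, not the paper's exact algorithm; the sigmoidal unit family, the random candidate search, the sample grid, and all parameter values are illustrative choices.

# Minimal sketch (assumed, not the paper's procedure) of incremental greedy
# approximation: at each step one new sigmoidal unit is fitted to the residual
# and added with a non-negative output weight, as the abstract describes.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_fit(f, x, n_units=20, n_candidates=500, seed=None):
    """Greedily add units h(x) = sigmoid(a*x + b) with output weights c >= 0."""
    rng = np.random.default_rng(seed)
    y = f(x)                      # target values on the sample grid
    approx = np.zeros_like(y)     # current network output
    units = []                    # list of (c, a, b) triples
    for _ in range(n_units):
        residual = y - approx
        best = None
        # Random search over candidate units; keep the one that best reduces
        # the residual, restricting to non-negative output weights.
        for _ in range(n_candidates):
            a, b = rng.uniform(-10, 10, size=2)
            h = sigmoid(a * x + b)
            c = max(0.0, np.dot(residual, h) / np.dot(h, h))
            err = np.linalg.norm(residual - c * h)
            if best is None or err < best[0]:
                best = (err, c, a, b)
        _, c, a, b = best
        units.append((c, a, b))
        approx = approx + c * sigmoid(a * x + b)
    return units, approx

# Illustrative target: approximate f(x) = sin(pi*x) on [0, 1] and report an
# empirical L2-type (RMS) error on the grid.
if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200)
    units, approx = incremental_fit(lambda t: np.sin(np.pi * t), x, n_units=15)
    print("RMS error:", np.sqrt(np.mean((np.sin(np.pi * x) - approx) ** 2)))

The random candidate search is only there to make the sketch runnable; the paper's analysis concerns how fast such n-term incremental approximations can converge under the stated smoothness assumptions, not the search procedure itself.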

DOI: http://dx.doi.org/10.1109/72.839004
