We provide a radically elementary proof of the universal approximation property of the one-hidden-layer perceptron, based on the Taylor expansion and the Vandermonde determinant. The proof works for both L^q and uniform approximation on compact sets. This approach naturally yields bounds for the design of the hidden layer and convergence results (including some rates) for the derivatives. A partial answer to Hornik's conjecture on the universality of the bias is proposed. An extension to vector-valued functions is also carried out. Copyright 1997 Elsevier Science Ltd.
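For orientation, here is a minimal sketch (notation entirely ours, not the paper's) of how a Taylor expansion combined with a Vandermonde determinant can yield universal approximation. We assume a scalar input x, an activation sigma that is smooth near some bias b with the relevant derivatives nonzero there, and a step size h > 0; none of these symbols come from the abstract itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minimal sketch (our notation): N+1 hidden units \sigma(jhx+b), j=0,...,N.
% Taylor expansion of each unit in the step size h:
\begin{equation*}
  \sigma(jhx+b)=\sum_{k=0}^{N} j^{k}\,\frac{\sigma^{(k)}(b)\,h^{k}x^{k}}{k!}
  +O\!\left(h^{N+1}\right).
\end{equation*}
% The coefficient matrix $(j^{k})_{0\le j,k\le N}$ is Vandermonde with the
% distinct nodes $0,1,\dots,N$, hence invertible. Taking $(\lambda_{j})$ to be
% the $m$-th row of its inverse, and assuming $\sigma^{(m)}(b)\neq 0$:
\begin{equation*}
  \frac{m!}{\sigma^{(m)}(b)\,h^{m}}\sum_{j=0}^{N}\lambda_{j}\,\sigma(jhx+b)
  = x^{m}+O\!\left(h^{N+1-m}\right),\qquad 0\le m\le N .
\end{equation*}
% Letting $h\to 0$, a one-hidden-layer network thus reproduces every monomial
% $x^{m}$, $m\le N$, uniformly on compact sets; density of polynomials
% completes the argument.
\end{document}

Under this reading, the hidden-layer size N+1 is tied directly to the degree of the approximating Taylor polynomial, which is consistent with the abstract's claim that the method yields design bounds for the hidden layer.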
DOI: http://dx.doi.org/10.1016/s0893-6080(97)00010-5