Multilayer perceptron networks whose outputs are affine combinations of tanh hidden units are universal function approximators and are widely used for regression, typically trained by minimizing the mean squared error (MSE) with backpropagation. We present a weight-learning algorithm that instead positions the hidden units directly within input space by numerically analyzing the curvature of the output surface. Our results show that, under certain sampling requirements, this method can reliably recover the parameters of the neural network used to generate a data set.
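To make the setting concrete, the sketch below implements the conventional baseline the abstract contrasts against: a single hidden layer of tanh units whose outputs are affinely combined, trained by gradient descent on the MSE. The target data are generated by a known tanh network, mirroring the parameter-recovery setup, but all weights, sizes, and learning rates here are arbitrary illustrative choices, not values from the paper; the curvature-based positioning algorithm itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data generated by a known single-hidden-layer tanh network (weights are
# arbitrary illustrative choices, not taken from the paper).
def target(x):
    return 1.5 * np.tanh(2.0 * x - 1.0) - 0.5 * np.tanh(-3.0 * x + 0.5)

X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = target(X)

# One hidden layer of H tanh units; the output is an affine combination.
H = 8
W1 = rng.normal(scale=0.5, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1))
b2 = np.zeros(1)

def forward(X):
    A = np.tanh(X @ W1 + b1)      # hidden-unit activations
    return A, A @ W2 + b2         # affine output layer

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

_, pred = forward(X)
initial_mse = mse(pred, y)

# Full-batch gradient descent on the MSE (plain backpropagation).
lr = 0.05
for _ in range(2000):
    A, pred = forward(X)
    grad_out = 2.0 * (pred - y) / len(X)   # dMSE/dpred
    gW2 = A.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    dA = (grad_out @ W2.T) * (1.0 - A**2)  # backprop through tanh
    gW1 = X.T @ dA
    gb1 = dA.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
final_mse = mse(pred, y)
print(f"MSE: {initial_mse:.4f} -> {final_mse:.4f}")
```

Because the data come from a network of the same family, backpropagation can drive the MSE low here, but unlike the direct-positioning approach it offers no guarantee of recovering the generating parameters themselves.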
DOI: http://dx.doi.org/10.1016/j.neunet.2011.01.006