On a natural homotopy between linear and nonlinear single-layer networks.

IEEE Trans Neural Netw

Dept. of Electr. and Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA.

Published: 1996

In this paper we formulate a homotopy approach for solving for the weights of a network by smoothly transforming a linear single-layer network into a nonlinear perceptron network. While other researchers have reported potentially useful numerical results based on heuristics related to this approach, the work presented here provides the first rigorous exposition of the deformation process. Results include a complete description of how the weights relate to the data space, a proof of the global convergence and validity of the method, and a rigorous formulation of the generalized orthogonality theorem that provides a geometric perspective on the solution process. This geometric interpretation clarifies the conditions under which local minima and infinite weights appear in network optimization procedures, and the similarities and differences between optimizing the weights of a nonlinear network and those of a linear network. The results provide a strong theoretical foundation for quantifying performance bounds on finite neural networks and for constructing globally convergent optimization approaches on finite data sets.
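The deformation idea in the abstract can be illustrated with a short numerical sketch. The following Python snippet is not the authors' algorithm, only a minimal assumed instance of the general technique: blend the activation between the identity and tanh with a homotopy parameter lam, start from the closed-form linear least-squares weights at lam = 0 (where the global optimum is known), and track the weights as lam steps toward 1. The data, step counts, and learning rate are all illustrative choices.

    # Homotopy sketch: deform a linear single-layer network into a tanh
    # perceptron and follow the weight solution along the path.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                 # inputs: 200 samples, 3 features
    t = np.tanh(X @ np.array([1.5, -2.0, 0.5]))   # targets from a nonlinear rule

    def act(z, lam):
        """Homotopy activation: identity at lam=0, tanh (perceptron) at lam=1."""
        return (1.0 - lam) * z + lam * np.tanh(z)

    def act_grad(z, lam):
        """Derivative of the homotopy activation with respect to its argument."""
        return (1.0 - lam) + lam * (1.0 - np.tanh(z) ** 2)

    # At lam = 0 the network is linear, so ordinary least squares gives the
    # global optimum -- the well-understood starting point of the homotopy.
    w = np.linalg.lstsq(X, t, rcond=None)[0]

    # Track the solution as the network is smoothly deformed.
    for lam in np.linspace(0.0, 1.0, 21):
        for _ in range(200):                       # a few gradient steps per lam
            z = X @ w
            r = act(z, lam) - t                    # residual at current weights
            g = X.T @ (r * act_grad(z, lam))       # gradient of 0.5 * ||r||^2
            w -= 0.05 * g

    print("weights after homotopy:", w)

The design point this sketch mirrors is that each small increment of lam leaves the previous weights close to a minimum of the new, slightly-more-nonlinear problem, which is what makes path-following tractable; the paper's contribution is the rigorous analysis of when this path exists, converges globally, and avoids local minima or divergent weights.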


Source: http://dx.doi.org/10.1109/72.485634

