The back-propagation method encounters two problems in practice: slow learning progress and convergence to a false local minimum. The present study addresses the latter problem and proposes a modified back-propagation method. The basic idea is to keep the sigmoid derivative relatively large while some of the error signals are still large. To this end, every connecting weight in the network is multiplied by a factor in the range (0, 1] at constant intervals during learning. Results of numerical experiments substantiate the validity of the method.
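To make the scaling step concrete, the following is a minimal sketch in NumPy of plain back-propagation on a tiny sigmoid network, with the periodic weight-scaling step added. The factor c = 0.9 and the 100-epoch interval are illustrative choices, not values taken from the paper, and the XOR task and network size are likewise assumptions for the sake of a runnable example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Tiny 2-2-1 sigmoid network trained on XOR with plain back-propagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))
b2 = np.zeros(1)

lr = 0.5        # learning rate
c = 0.9         # scaling factor in (0, 1]  (hypothetical choice)
interval = 100  # scale weights every `interval` epochs (hypothetical)

for epoch in range(1, 5001):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass for a squared-error loss; out*(1-out) and h*(1-h)
    # are the sigmoid derivatives that shrink toward zero on plateaus.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

    # Periodic weight scaling: shrinking the connecting weights pulls the
    # net inputs toward zero, where the sigmoid derivative is largest, so
    # large error signals keep propagating instead of stalling.  Per the
    # abstract, only connecting weights are scaled, not the biases.
    if epoch % interval == 0:
        W1 *= c
        W2 *= c
```

The scaling acts much like an intermittent weight decay: because the sigmoid derivative is maximal at zero net input, keeping weights from growing too large preserves gradient flow while the error is still substantial.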
DOI: http://dx.doi.org/10.1016/s0893-6080(98)00087-2