Persistence is one of the most common characteristics of real-world time series. In this work we investigate the process of learning persistent dynamics by neural networks. We show that for chaotic time series the network can get stuck for long training periods in a trivial minimum of the error function related to the long-term autocorrelation in the series. Remarkably, in these cases the transition to the trained phase is quite abrupt. For noisy dynamics the training process is smooth. We also consider the effectiveness of two of the most frequently used decorrelation methods in avoiding the problems related to persistence. Copyright 1997 Elsevier Science Ltd.
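The abstract does not name the two decorrelation methods it evaluates; as an illustration only, the sketch below shows how one common technique, first differencing, reduces the long-term autocorrelation of a persistent series. The AR(1) process and the coefficient 0.95 are assumptions for the example, not details from the paper.

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a 1-D series at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(0)

# A persistent (strongly autocorrelated) AR(1) series:
#   x_t = 0.95 * x_{t-1} + noise   (hypothetical example process)
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()

# First differencing, a standard decorrelation step:
#   d_t = x_t - x_{t-1}
d = np.diff(x)

print(autocorr(x, 1))  # close to 0.95: strong persistence
print(autocorr(d, 1))  # near zero: persistence largely removed
```

The differenced series presents the network with increments rather than levels, so a trivial predictor that exploits the long-term autocorrelation (e.g. predicting the previous value) no longer achieves low error.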
DOI: http://dx.doi.org/10.1016/s0893-6080(97)00091-9