Difference between memory and prediction in linear recurrent networks.

Phys Rev E

Department of Physics, Physics of Living Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.

Published: September 2017

Recurrent networks are often trained to memorize their input, in the hope that such training will also increase the network's ability to predict. We show that networks designed to memorize input can be arbitrarily bad at prediction. We also find, for several types of inputs, that one-node networks optimized for prediction come close to the upper bounds on predictive capacity set by Wiener filters and perform roughly as well as randomly generated five-node networks. Our results suggest that maximizing memory capacity leads to very different networks than maximizing predictive capacity, and that optimizing recurrent weights can decrease reservoir size by half an order of magnitude.
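The quantities compared in the abstract can be illustrated with a toy numerical sketch (this is an illustrative setup, not the authors' actual construction): a small random linear reservoir is driven by an AR(1) input, and "memory capacity" and "predictive capacity" are estimated as sums of squared correlations between shifted copies of the input and the best linear readout of the reservoir state. All names (`W`, `v`, `capacity`) and parameter choices here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-node linear reservoir driven by an AR(1) input (illustrative only).
N, T = 5, 20000
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1 for stability
v = rng.normal(size=N)

u = np.zeros(T)
for t in range(1, T):                        # AR(1) input: u_t = 0.8 u_{t-1} + noise
    u[t] = 0.8 * u[t - 1] + rng.normal()

x = np.zeros((T, N))
for t in range(1, T):                        # linear recurrent dynamics
    x[t] = W @ x[t - 1] + v * u[t - 1]

def capacity(lags):
    """Sum over lags of the squared correlation between the shifted input
    and its best linear reconstruction from the reservoir state."""
    total = 0.0
    X = x[N:-N]                              # trim edges to avoid np.roll wraparound
    for k in lags:
        target = np.roll(u, k)[N:-N]         # k > 0: past input (memory); k < 0: future (prediction)
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
        total += np.corrcoef(X @ w, target)[0, 1] ** 2
    return total

mem = capacity(range(1, 6))     # memory capacity over lags 1..5
prd = capacity(range(-5, 0))    # predictive capacity over leads 1..5
print(f"memory capacity ~ {mem:.2f}, predictive capacity ~ {prd:.2f}")
```

For a correlated input like this AR(1) process, both capacities are positive and bounded by the number of lags considered; the paper's point is that optimizing the reservoir for one quantity need not help, and can hurt, the other.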


Source: http://dx.doi.org/10.1103/PhysRevE.96.032308

Publication Analysis

Top Keywords

recurrent networks (8), memorize input (8), predictive capacity (8), networks maximizing (8), networks (6), difference memory (4), memory prediction (4), prediction linear (4), linear recurrent (4), networks recurrent (4)

