Prior language input is not lost but integrated with the current input. This principle is demonstrated by "reservoir computing": Untrained recurrent neural networks project each input sequence onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably as more input is received. The bottleneck is therefore not "Now-or-Never" but "Sooner-is-Better."
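As a rough illustration of this fading-memory property (a minimal sketch, not the commentary's actual model), the following echo-state-style reservoir uses fixed random weights and never trains the recurrent network itself; only linear readouts are fitted to reconstruct the input from k steps back. All names and parameter values (reservoir size, spectral radius, the tested delays) are illustrative assumptions.

```python
# Minimal reservoir-computing sketch: an untrained random recurrent network
# whose state is a nonlinear projection of the whole input history.
import numpy as np

rng = np.random.default_rng(0)

N = 200   # reservoir size (dimensionality of the random projection); assumed
T = 2000  # length of the random input sequence; assumed

# Fixed, untrained random weights: input projection and recurrent matrix,
# scaled to spectral radius < 1 so that older inputs fade rather than explode.
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Random scalar input sequence.
u = rng.uniform(-1.0, 1.0, size=T)

# Run the reservoir: each state X[t] integrates the current input with
# a decaying trace of all prior inputs.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)
    X[t] = x

# For each delay k, fit a linear readout (ridge regression) that tries to
# reconstruct the input from k steps ago out of the *current* state.
washout = 100  # discard initial transient states
for k in [1, 5, 10, 20, 40]:
    Xs = X[washout:]
    ys = np.roll(u, k)[washout:]  # ys[t] = u[t - k]
    w_out = np.linalg.solve(Xs.T @ Xs + 1e-6 * np.eye(N), Xs.T @ ys)
    pred = Xs @ w_out
    r2 = np.corrcoef(pred, ys)[0, 1] ** 2
    print(f"delay {k:3d}: recoverable variance R^2 = {r2:.3f}")
```

Under these assumptions, the printed R^2 is near 1 for recent inputs and falls toward 0 as the delay grows: earlier inputs remain retrievable from the projection, but decreasingly so as more input arrives, which is the graded "Sooner-is-Better" pattern rather than an all-or-nothing "Now-or-Never" loss.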
DOI: http://dx.doi.org/10.1017/S0140525X15000783