In a real brain, perception is a bidirectional process, depending on both feedforward sensory pathways and feedback pathways that carry expectations. We are interested in how such a neural network might emerge from a biologically plausible learning rule. Other neural network learning methods either apply only to feedforward networks or rely on assumptions (such as weight copying) that make them unlikely in a real brain. Predictive estimators (PEs) are better suited to this bidirectional learning scenario, but they, too, depend on weight copying. In this paper, we propose the symmetric PE (SPE), an architecture that learns the feedforward and feedback connection weights separately, using only locally available information. We demonstrate that the SPE can learn complicated mappings without weight copying, and that SPE networks also show promise in deeper architectures.
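To make the idea of weight-copy-free, locally driven learning concrete, the sketch below pairs an independent feedforward matrix `W` with a feedback matrix `V`, each updated only from activity and error signals available at its own layer (the feedback weights are never set to the transpose of the feedforward weights). The variable names and the specific update rules are illustrative assumptions, not the SPE equations from the paper.

```python
# Minimal sketch of predictive-estimator-style learning with separate
# feedforward (W) and feedback (V) weights, each trained from locally
# available signals only; no weight copying (V is never set to W.T).
# The update rules here are assumed for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))   # feedforward weights
V = rng.normal(scale=0.1, size=(n_in, n_out))   # feedback weights, independent of W
lr = 0.01

def step(x, target):
    """One update: each weight matrix sees only its own pre/post activity
    and the error signal available at its own layer."""
    global W, V
    y = np.tanh(W @ x)            # feedforward pass (sensory input -> higher layer)
    x_hat = V @ y                 # feedback prediction (expectation of the input)
    e_out = target - y            # error available at the higher layer
    e_in = x - x_hat              # prediction error available at the input layer
    W += lr * np.outer(e_out, x)  # local rule: postsynaptic error x presynaptic activity
    V += lr * np.outer(e_in, y)   # local rule for feedback weights, learned separately
    return y, x_hat

# Toy usage: learn a random nonlinear mapping from input to target.
A = rng.normal(size=(n_out, n_in))
for _ in range(2000):
    x = rng.normal(size=n_in)
    step(x, np.tanh(A @ x))
```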
DOI: http://dx.doi.org/10.1109/TNNLS.2017.2756859