Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
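The core idea behind e-prop is that the loss gradient for a recurrent spiking network factorizes into a local eligibility trace, computed forward in time at each synapse, multiplied by a learning signal broadcast to each neuron, so no backward pass through time is required. The following is a minimal illustrative sketch of that factorization for a simplified leaky-integrate neuron model with a surrogate spike derivative; the network size, input data, error signal, and all parameter values here are placeholder assumptions, not the paper's full model.

```python
import numpy as np

# Hedged sketch of an e-prop-style online update for input weights of
# simplified leaky integrate-and-fire neurons (no reset, no recurrence,
# random placeholder data). The gradient is approximated as
#   dE/dW_ji ≈ sum_t L_j(t) * e_ji(t),
# where e_ji(t) = psi_j(t) * eps_i(t) is a local eligibility trace.

rng = np.random.default_rng(0)
n_in, n_rec, T = 3, 4, 20            # inputs, neurons, time steps (assumed)
alpha, eta = 0.9, 0.01               # membrane leak factor, learning rate
W_in = rng.normal(0.0, 0.5, (n_rec, n_in))

v = np.zeros(n_rec)                  # membrane potentials
eps = np.zeros(n_in)                 # low-pass filtered presynaptic activity
dW = np.zeros_like(W_in)

x = rng.random((T, n_in))            # placeholder input activity
target = np.zeros((T, n_rec))        # placeholder target activity

for t in range(T):
    v = alpha * v + W_in @ x[t]                    # leaky integration
    z = (v > 1.0).astype(float)                    # spike on threshold crossing
    psi = np.maximum(0.0, 1.0 - np.abs(v - 1.0))   # surrogate derivative
    eps = alpha * eps + x[t]                       # filtered presynaptic trace
    e = psi[:, None] * eps[None, :]                # eligibility trace e_ji(t)
    L = z - target[t]                              # simple broadcast error signal
    dW += -eta * L[:, None] * e                    # accumulate online update

W_in += dW                                         # apply accumulated update
```

Because the eligibility trace is updated forward in time from quantities local to each synapse, the update can be applied online, which is what makes the scheme attractive for on-chip learning in spike-based hardware.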


Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7367848
DOI: http://dx.doi.org/10.1038/s41467-020-17236-y


