Neuronal-Plasticity and Reward-Propagation Improved Recurrent Spiking Neural Networks.

Front Neurosci

Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China.

Published: March 2021

Dynamics and plasticity principles observed in biological neural networks have been successfully applied to spiking neural networks (SNNs), which offer biologically plausible, efficient, and robust computation compared with their deep neural network (DNN) counterparts. Here, we propose a Neuronal-plasticity and Reward-propagation improved Recurrent SNN (NRR-SNN). A history-dependent adaptive threshold with two channels is introduced as a key form of neuronal plasticity that enriches neuronal dynamics, and global labels, rather than errors, are used as the reward signal for parallel gradient propagation. In addition, a recurrent loop with appropriate sparseness is designed for robust computation. The proposed NRR-SNN achieves higher accuracy and more robust computation on two sequential datasets (the TIDigits and TIMIT datasets), which, to some extent, demonstrates the benefit of these biologically plausible improvements.
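The history-dependent adaptive threshold can be illustrated with a minimal sketch: a leaky integrate-and-fire neuron whose firing threshold rises after each of its own spikes and then decays back toward baseline. This is a generic single-channel simplification, not the paper's two-channel formulation; all parameter names (`tau_m`, `tau_a`, `beta`, etc.) are illustrative assumptions.

```python
import numpy as np

def lif_adaptive_threshold(input_current, n_steps, tau_m=20.0, tau_a=200.0,
                           v_rest=0.0, v_reset=0.0, theta0=1.0, beta=0.2, dt=1.0):
    """Leaky integrate-and-fire neuron with a history-dependent threshold.

    The effective threshold is theta0 + beta * a, where the adaptation
    variable `a` jumps by 1 on each spike and decays with time constant tau_a,
    so recent spiking makes the neuron harder to fire.
    """
    v = v_rest          # membrane potential
    a = 0.0             # adaptation variable tracking recent spike history
    spikes = np.zeros(n_steps)
    for t in range(n_steps):
        # leaky integration of the input current toward v_rest
        v += dt / tau_m * (v_rest - v) + input_current[t]
        # effective threshold rises with recent activity
        theta = theta0 + beta * a
        if v >= theta:
            spikes[t] = 1.0
            v = v_reset
            a += 1.0
        # adaptation decays back toward zero (baseline threshold)
        a *= np.exp(-dt / tau_a)
    return spikes
```

Under a constant driving current, the inter-spike interval of such a neuron grows over time as the threshold accumulates, which is the kind of richer temporal dynamics the adaptive threshold is meant to provide.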


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7994752
DOI: http://dx.doi.org/10.3389/fnins.2021.654786

Publication Analysis

Top Keywords: neural networks (16), neuronal-plasticity reward-propagation (8), reward-propagation improved (8), improved recurrent (8), spiking neural (8), robust computation (8), recurrent spiking (4), neural (4), networks (4), networks types (4)
