Back-Propagation Learning in Deep Spike-By-Spike Networks.

Front Comput Neurosci

Institute for Theoretical Physics, University of Bremen, Bremen, Germany.

Published: August 2019

Artificial neural networks (ANNs) are important building blocks in technical applications. They rely on noiseless continuous signals, in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. We propose to bridge this gap with Spike-by-Spike (SbS) networks, which represent a compromise between non-spiking and spiking versions of generative models. What is missing, however, are algorithms for finding weight sets that would optimize the output performance of deep SbS networks with many layers. Here, a learning rule for feed-forward SbS networks is derived. The properties of this approach are investigated and its functionality is demonstrated by simulations. In particular, a Deep Convolutional SbS network for classifying handwritten digits achieves a classification performance of roughly 99.3% on the MNIST test data when the learning rule is applied together with an optimizer. It thereby approaches the benchmark results of ANNs without extensive parameter optimization. We envision this learning rule for SbS networks to provide a new basis for research in neuroscience and for technical applications, especially when implemented on specialized computational hardware.
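To make the "spike-by-spike" idea concrete, the sketch below illustrates the kind of per-spike latent-variable update that SbS-style generative models use: each incoming spike (the index of an active input neuron) nudges a normalized vector of hidden activations toward the hidden causes that best explain it. This is a toy illustration, not the paper's derived learning rule; the weight matrix `W`, the update rate `eps`, and the random spike train are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 6, 4

# Hypothetical generative weights: column i approximates p(input | hidden cause i),
# so each column is normalized over the input dimension.
W = rng.random((n_input, n_hidden))
W /= W.sum(axis=0, keepdims=True)

eps = 0.1                              # update rate per spike (illustrative value)
h = np.full(n_hidden, 1.0 / n_hidden)  # latent variables, start uniform

# Toy spike train: each entry is the index of the input neuron that fired.
spikes = rng.integers(0, n_input, size=200)

for s in spikes:
    p = h * W[s]                       # unnormalized posterior over hidden causes
    p /= p.sum()                       # ... given this single spike
    h = (h + eps * p) / (1.0 + eps)    # per-spike update; h stays normalized

print(h)                               # non-negative, sums to 1
```

Because the update is a convex-like mixture of `h` and a normalized posterior, `h` remains a valid probability vector after every spike, which is what lets inference proceed one discrete spike at a time rather than on continuous signals.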


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6700320
DOI: http://dx.doi.org/10.3389/fncom.2019.00055

Publication Analysis

Top Keywords (frequency)

sbs networks (16), learning rule (12), technical applications (8), networks (6), sbs (5), back-propagation learning (4), learning deep (4), deep spike-by-spike (4), spike-by-spike networks (4), networks artificial (4)

