AI Article Synopsis

  • The proposed calcium-gated bipolar leaky integrate and fire (Ca-LIF) neuron model aims to closely replicate the behavior of ReLU neurons common in ANNs, simplifying the conversion process.
  • Their new quantization-aware training (QAT) framework allows for direct export of ANN weights to SNNs without additional processing, resulting in competitive accuracy and shorter inference times across various deep network architectures.

Article Abstract

Spiking neural networks (SNNs) have attracted intensive attention due to their efficient, event-driven computing paradigm. Among SNN training methods, ANN-to-SNN conversion is usually regarded as achieving state-of-the-art recognition accuracy. However, many existing ANN-to-SNN techniques impose lengthy post-conversion steps, such as threshold balancing and weight renormalization, to compensate for the inherent behavioral discrepancy between artificial and spiking neurons. In addition, they require a long temporal window to encode and process as many spikes as possible to better approximate the real-valued ANN neurons, leading to high inference latency. To overcome these challenges, we propose a calcium-gated bipolar leaky integrate and fire (Ca-LIF) spiking neuron model to better approximate the function of the ReLU neurons widely adopted in ANNs. We also propose a quantization-aware training (QAT)-based framework that leverages an off-the-shelf QAT toolkit for easy ANN-to-SNN conversion and directly exports the learned ANN weights to SNNs, requiring no post-conversion processing. We benchmarked our method on typical deep network structures with time-step lengths varying from 8 to 128. Compared with other work, our converted SNNs achieve competitively high accuracy while requiring relatively few inference time steps.
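The abstract's two key ideas can be seen in miniature: a spiking neuron whose firing rate over T time steps approximates a clipped ReLU, and a quantized ReLU whose discrete activation levels correspond one-to-one with spike counts, which is what allows QAT-learned weights to transfer to the SNN without post-conversion rescaling. The sketch below is illustrative only: the abstract does not specify the Ca-LIF dynamics, so a plain soft-reset integrate-and-fire neuron stands in for it, and the parameter names (T, threshold) are assumptions, not the paper's notation.

```python
import numpy as np

def if_rate(x, T=32, threshold=1.0):
    """Firing rate of a soft-reset integrate-and-fire neuron driven by a
    constant input x for T time steps. This is a generic stand-in for the
    paper's Ca-LIF model (whose gating dynamics the abstract does not give):
    with reset-by-subtraction, the rate approximates clip(relu(x), 0, threshold).
    """
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                 # integrate the input current
        if v >= threshold:     # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes * threshold / T

def quantized_relu(x, T=32, threshold=1.0):
    """QAT-style activation: a ReLU clipped at `threshold` and quantized to
    T uniform levels, so each level corresponds to one spike over T steps."""
    return np.floor(np.clip(x, 0.0, threshold) * T / threshold) * threshold / T

if __name__ == "__main__":
    for x in (-0.5, 0.2, 0.7, 1.3):
        print(f"x={x:+.2f}  IF rate={if_rate(x):.4f}  "
              f"quantized ReLU={quantized_relu(x):.4f}")
```

Because each quantized activation level maps to an integer spike count over the T-step window, weights trained under this quantization can in principle be copied into the SNN unchanged; the calcium gating and bipolar spiking of the actual Ca-LIF model, which the abstract credits with closing the remaining approximation gap, are not modeled here.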

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10030499
DOI: http://dx.doi.org/10.3389/fnins.2023.1141701

Publication Analysis

Top Keywords

ann-to-snn conversion (12)
quantization-aware training (8)
calcium-gated bipolar (8)
bipolar leaky (8)
leaky integrate (8)
integrate fire (8)
better approximate (8)
high-accuracy deep (4)
ann-to-snn (4)
deep ann-to-snn (4)
