Brain-inspired spiking neural networks (SNNs) are increasingly explored for their potential in spatiotemporal information modeling and energy efficiency on emerging neuromorphic hardware. Recent works incorporate attention modules into SNNs, greatly enhancing their ability to handle sequential data. However, these parameterized attention modules impose a substantial memory burden, a resource that is tightly constrained on neuromorphic chips. To address this issue, we propose a parameter-free attention (PfA) mechanism that establishes a parameter-free linear space to bolster feature representation. The proposed PfA approach can be seamlessly integrated into the spiking neuron, improving performance without adding any parameters. Experimental results on the SHD, BAE-TIDIGITS, SSC, DVS-Gesture, DVS-CIFAR10, CIFAR-10, and CIFAR-100 datasets demonstrate competitive or superior classification accuracy compared with other state-of-the-art models. Furthermore, our model exhibits stronger noise robustness than conventional SNNs and SNNs with parameterized attention mechanisms. Our code is available at https://github.com/sunpengfei1122/PfA-SNN.
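The abstract does not detail the PfA formulation, so the following is only a minimal illustrative sketch of the general idea: a parameter-free, statistics-based re-weighting (here a SimAM-style energy function, a known parameter-free attention) applied to the membrane potential of a leaky integrate-and-fire (LIF) neuron. The names `pfa` and `PfALIF`, the integration point, and the exact formula are assumptions, not the authors' method; consult the linked repository for the actual PfA.

```python
# Hypothetical sketch of a parameter-free attention fused into a LIF neuron.
# The energy-based weighting below is SimAM-style, used here only to
# illustrate "attention with zero learnable parameters"; the paper's PfA
# may differ.
import torch
import torch.nn as nn


def pfa(x: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Parameter-free attention: re-weight features by how much each
    activation deviates from the per-sample mean. No learnable parameters.
    x: (batch, features) or (batch, channels, H, W); output has same shape."""
    dims = tuple(range(1, x.dim()))           # all non-batch dimensions
    n = max(x[0].numel() - 1, 1)
    d = (x - x.mean(dim=dims, keepdim=True)).pow(2)
    v = d.sum(dim=dims, keepdim=True) / n     # variance-like statistic
    attn = torch.sigmoid(d / (4 * (v + eps)) + 0.5)
    return x * attn


class PfALIF(nn.Module):
    """LIF neuron with PfA applied to the membrane potential before
    thresholding (one plausible integration point)."""

    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (time, batch, features); returns spikes of the same shape.
        v = torch.zeros_like(inputs[0])
        spikes = []
        for x_t in inputs:
            v = v + (x_t - v) / self.tau      # leaky integration
            v = pfa(v)                        # parameter-free re-weighting
            s = (v >= self.v_th).float()      # hard threshold; training would
                                              # need a surrogate gradient
            v = v * (1.0 - s)                 # hard reset after spiking
            spikes.append(s)
        return torch.stack(spikes)


if __name__ == "__main__":
    x = torch.randn(10, 4, 128)               # (T=10, B=4, 128 features)
    print(PfALIF()(x).shape)                   # torch.Size([10, 4, 128])
```

Because the weighting is computed purely from activation statistics, the module adds no parameters and hence no extra weight memory, which is the constraint the abstract highlights for neuromorphic chips.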
DOI: http://dx.doi.org/10.1016/j.neunet.2025.107154