The potential energy efficiency of spiking neural networks (SNNs) has drawn the attention of the AI community. However, CPU-only SNN processing inevitably leads to long execution times for large models and massive datasets. This study introduces the MAC array, a parallel multiply-accumulate architecture on each processing element (PE) of SpiNNaker 2, into the computational process of SNN inference.
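The snippet does not include an implementation, but the core idea can be sketched: because spikes are binary, each layer's synaptic integration reduces to a matrix-vector multiply-accumulate over the spike vector, which is exactly the operation a per-PE MAC array parallelizes. Below is a minimal NumPy sketch; the function name, the LIF update, and all parameters are illustrative, not the SpiNNaker 2 API.

```python
import numpy as np

def lif_layer_step(weights, spikes_in, v, v_th=1.0, decay=0.9):
    """One SNN time step: MAC-style synaptic integration plus a LIF update.

    weights:   (n_out, n_in) synaptic weight matrix
    spikes_in: (n_in,) binary spike vector from the previous layer
    v:         (n_out,) membrane potentials
    """
    # The multiply-accumulate core: on SpiNNaker 2, this matrix-vector
    # product is the part a per-PE MAC array would parallelize.
    i_syn = weights @ spikes_in
    v = decay * v + i_syn                      # leaky integration
    spikes_out = (v >= v_th).astype(np.float32)
    v = np.where(spikes_out > 0, 0.0, v)       # reset on spike
    return spikes_out, v

# Toy usage: a 256-input, 128-neuron layer over 10 time steps.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(128, 256)).astype(np.float32)
v = np.zeros(128, dtype=np.float32)
for _ in range(10):
    s_in = (rng.random(256) < 0.05).astype(np.float32)  # ~5% firing rate
    s_out, v = lif_layer_step(w, s_in, v)
```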
Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, with many applications such as adapting models to specific situations (e.g., changes in environmental settings) or optimizing them for individuals.
Int J Neural Syst, December 2022
It has been shown that, compared with the 32-bit floating-point numbers used in the training phase, deep convolutional neural networks (DCNNs) can operate at low precision during inference, saving memory footprint and power consumption. However, neural network quantization is always accompanied by accuracy degradation. Here, we propose a quantization method called double-stage Squeeze-and-Threshold (double-stage ST) to close the accuracy gap with full-precision models.
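The snippet does not describe the double-stage ST mechanism itself. As background, here is a minimal sketch of plain threshold-based uniform activation quantization, which illustrates why the clipping threshold matters for the accuracy gap that a learned thresholding stage aims to close. All names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def quantize_uniform(x, n_bits=4, threshold=None):
    """Uniform quantization of post-ReLU activations to n_bits.

    `threshold` plays the role of the clipping value that a learned
    thresholding stage would provide; falling back to the tensor's max
    is the naive (usually lossier) choice.
    """
    t = np.max(np.abs(x)) if threshold is None else threshold
    levels = 2 ** n_bits - 1
    x_clipped = np.clip(x, 0.0, t)          # assumes non-negative activations
    q = np.round(x_clipped / t * levels)    # integer code in [0, levels]
    return q * t / levels                   # dequantized value

x = np.random.default_rng(1).normal(0, 1, 1000)
x = np.maximum(x, 0)                        # ReLU
err_naive = np.mean((x - quantize_uniform(x, 4)) ** 2)
err_tuned = np.mean((x - quantize_uniform(x, 4, threshold=2.0)) ** 2)
# A well-chosen threshold usually lowers the quantization error; learning
# that threshold is the kind of gap-closing a thresholding stage targets.
```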
Developing technologies for coupling neural activity with artificial neural components is key to advancing neural interfaces and neuroprosthetics. We present a biohybrid experimental setting in which the activity of a biological neural network is coupled to a biomimetic hardware network. The hardware network (denoted NeuroSoC) exhibits complex dynamics with a multiplicity of time scales, emulating 2880 neurons.
The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. However, these techniques do not apply when neural networks must be trained directly on hardware under hard memory constraints.
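For reference, the two post-training techniques the paragraph names can be sketched in a few lines; note that both presuppose an already-trained weight matrix, which is exactly why they fail under on-hardware training constraints. A sketch with illustrative parameters:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights after training."""
    k = int(sparsity * w.size)
    thresh = np.partition(np.abs(w).ravel(), k)[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def reduce_precision(w, n_bits=8):
    """Symmetric uniform rounding of weights to n_bits."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

# Both steps operate on w_trained, i.e., after training has finished.
w_trained = np.random.default_rng(2).normal(0, 0.1, (64, 64))
w_small = reduce_precision(magnitude_prune(w_trained, 0.9), 8)
```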
Synaptic plasticity plays a crucial role in allowing neural networks to learn and adapt to various input environments. Neuromorphic systems need to implement plastic synapses to obtain basic "cognitive" capabilities such as learning. One promising and scalable approach for implementing neuromorphic synapses is to use nano-scale memristors as synaptic elements.
Memristive devices are popular among neuromorphic engineers for their ability to emulate forms of spike-driven synaptic plasticity by applying specific voltage and current waveforms at their two terminals. In this paper, we investigate spike-timing-dependent plasticity (STDP) with a single pairing of one presynaptic voltage spike and one postsynaptic voltage spike in a BiFeO3 memristive device. In most memristive materials, the learning window is primarily a function of the material characteristics rather than of the applied waveform.
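For context, the conventional exponential STDP window that such device measurements are typically compared against can be written as below. The amplitudes and time constants are generic textbook values, not the BiFeO3 device's measured window.

```python
import numpy as np

def stdp_weight_change(dt, a_plus=0.05, a_minus=0.04,
                       tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP window.

    dt = t_post - t_pre in milliseconds: positive dt (pre before post)
    potentiates, negative dt depresses. In a memristive device, the
    shape of this window is set mainly by the material, not the waveform.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.linspace(-100, 100, 11)   # sweep of spike-pairing intervals
dw = stdp_weight_change(dts)       # corresponding weight changes
```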
Memristive devices present a new device technology allowing for the realization of compact non-volatile memories. Some of them are already in the process of industrialization. Additionally, they exhibit complex multilevel and plastic behaviors, which make them good candidates for the implementation of artificial synapses in neuromorphic engineering.
Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond-to-second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed.
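As a point of reference, the first-order dynamics that such subthreshold circuits implement can be sketched in discrete time; the value of the analog implementation is realizing the long time constants compactly in silicon, which the code below only models. All names and parameters are illustrative.

```python
import numpy as np

def exp_synapse(spike_train, tau_ms, dt_ms=1.0, w=1.0):
    """Discrete-time first-order synapse: the current decays with time
    constant tau and jumps by w on each input spike. Analog subthreshold
    circuits realize the same dynamics with capacitors and subthreshold
    transistor currents."""
    decay = np.exp(-dt_ms / tau_ms)
    i = np.zeros(len(spike_train))
    acc = 0.0
    for t, s in enumerate(spike_train):
        acc = acc * decay + w * s
        i[t] = acc
    return i

spikes = np.zeros(2000)
spikes[[100, 600, 650]] = 1
i_fast = exp_synapse(spikes, tau_ms=10.0)    # short-term, millisecond scale
i_slow = exp_synapse(spikes, tau_ms=1000.0)  # long-term, second scale
```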
Efficient analog-to-digital converters (ADCs) are one of the mainstays of mixed-signal integrated circuit design. Besides the conventional ADCs used in mainstream ICs, there have been various attempts in the past to utilize neuromorphic networks to accomplish an efficient crossing between the analog and digital domains.
State-of-the-art large-scale neuromorphic systems require sophisticated spike-event communication between units of the neural network. We present a high-speed communication infrastructure for a wafer-scale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas.
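Configurable axonal delay can be modeled in software as a per-axon delay line over the spike-event stream. The sketch below is a toy model of that feature, not the communication ICs' actual interface; all names are hypothetical.

```python
from collections import defaultdict

class DelayLine:
    """Route spike events with per-axon configurable delays, a software
    analogue of the delay feature provided by the communication ICs."""
    def __init__(self):
        self.delays = {}                  # axon id -> delay in time steps
        self.pending = defaultdict(list)  # delivery time -> [axon ids]

    def configure(self, axon, delay_steps):
        self.delays[axon] = delay_steps

    def send(self, axon, t_now):
        # Schedule the event for delivery after the configured delay.
        self.pending[t_now + self.delays[axon]].append(axon)

    def deliver(self, t_now):
        # Return all events whose delivery time is now.
        return self.pending.pop(t_now, [])

net = DelayLine()
net.configure(axon=7, delay_steps=3)
net.send(axon=7, t_now=10)
assert net.deliver(13) == [7]   # arrives three steps later
```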
Front Synaptic Neurosci, July 2011
Classically, action-potential-based learning paradigms such as the Bienenstock-Cooper-Munro (BCM) rule for pulse rates or spike-timing-dependent plasticity (STDP) for pulse pairings have been experimentally demonstrated to evoke long-lasting synaptic weight changes (i.e., plasticity).
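For reference, the rate-based BCM rule mentioned above can be sketched with its characteristic sliding modification threshold; the parameter values are illustrative textbook choices, not from the paper.

```python
import numpy as np

def bcm_update(w, pre_rate, post_rate, theta,
               eta=1e-3, tau_theta=100.0, dt=1.0):
    """One BCM step: dw ~ pre * post * (post - theta). The threshold theta
    slides toward the recent average of post_rate**2, which stabilizes
    learning by raising the bar after periods of high activity."""
    dw = eta * pre_rate * post_rate * (post_rate - theta)
    theta += dt / tau_theta * (post_rate ** 2 - theta)
    return w + dw, theta

# Toy usage: a single synapse driven at a constant presynaptic rate.
w, theta = 0.6, 1.0
for _ in range(1000):
    pre = 2.0
    post = w * pre               # linear rate neuron
    w, theta = bcm_update(w, pre, post, theta)
```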