AI Article Synopsis

  • Machine learning has gained significant traction in both research and industry, especially in areas like image classification, but its reliance on substantial computing resources makes it poorly suited to embedded systems.
  • To address this, researchers are exploring brain-inspired computing as an energy-efficient alternative to traditional CPU/GPU systems, focusing on designing adaptable neuromorphic hardware for parallel and distributed computations.
  • The paper introduces a framework for neuromorphic hardware design, along with a simulator named NAXT, to optimize Spiking Neural Networks by modifying coding techniques to reduce spike events, thereby lowering power consumption while maintaining performance.

Article Abstract

Machine learning is attracting unprecedented interest in research and industry, owing to recent successes in applied contexts such as image classification and object recognition. However, deploying these systems requires enormous computing capability, making them unsuitable for embedded systems. To deal with this limitation, many researchers are investigating brain-inspired computing as an alternative to conventional Von Neumann architecture-based computers (CPUs/GPUs), which meet the requirements for computing performance but not for energy efficiency. Therefore, neuromorphic hardware circuits that are adaptable to both parallel and distributed computation need to be designed. In this paper, we focus on Spiking Neural Networks (SNNs) with a comprehensive study of neural coding methods and hardware exploration. In this context, we propose a framework for neuromorphic hardware design space exploration, which makes it possible to define a suitable architecture based on application-specific constraints, starting from a wide variety of possible architectural choices. For this framework, we have developed a behavioral-level simulator for neuromorphic hardware architectural exploration named NAXT. Moreover, we propose modified versions of the standard Rate Coding technique that trade off against the Time Coding paradigm, which is characterized by the low number of spikes propagating through the network. We are thus able to reduce the number of spikes while keeping the same neuron model, which results in an SNN with fewer events to process. By doing so, we seek to reduce the amount of power consumed by the hardware. Furthermore, we present three neuromorphic hardware architectures in order to quantitatively study the implementation of SNNs. One of these architectures integrates a novel hybrid structure: a highly parallel computation core for the most heavily solicited layers, and time-multiplexed computation units for deeper layers. These architectures are derived from a novel funnel-like Design Space Exploration framework for neuromorphic hardware.
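
To make the coding trade-off concrete, here is a minimal illustrative sketch in Python/NumPy of standard rate coding (Bernoulli spikes whose probability tracks input intensity) next to one possible spike-reducing variant that simply scales the firing probability down. The function names and the scaling approach are hypothetical stand-ins; they are not the paper's actual modified coding schemes, its neuron model, or the NAXT simulator.

```python
import numpy as np

def rate_code(values, n_steps, max_rate=1.0, rng=None):
    """Standard rate coding: each normalized input in [0, 1] sets the
    per-timestep firing probability of its neuron (Bernoulli spikes).
    Returns an (n_steps, n_neurons) binary spike train."""
    if rng is None:
        rng = np.random.default_rng()
    probs = np.clip(values * max_rate, 0.0, 1.0)
    return (rng.random((n_steps, values.size)) < probs).astype(np.uint8)

def reduced_rate_code(values, n_steps, scale=0.25, rng=None):
    """Hypothetical spike-reducing variant: same spike-train format and
    neuron-side interpretation, but firing probabilities are scaled
    down so fewer spike events propagate through the network."""
    return rate_code(values, n_steps, max_rate=scale, rng=rng)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pixels = rng.random(784)  # e.g., a normalized 28x28 input image
    standard = rate_code(pixels, n_steps=100, rng=rng)
    reduced = reduced_rate_code(pixels, n_steps=100, scale=0.25, rng=rng)
    # Fewer spike events means fewer synaptic operations to process.
    print("standard rate coding spikes:", int(standard.sum()))
    print("reduced rate coding spikes: ", int(reduced.sum()))
```

In an event-driven neuromorphic circuit, dynamic power grows roughly with the number of spike events processed, so cutting the spike count in this way is what translates into the power savings the abstract describes.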

Source
http://dx.doi.org/10.1016/j.neunet.2019.09.024

Publication Analysis

Top Keywords

neuromorphic hardware (20)
design space (12)
space exploration (12)
hardware (8)
architecture based (8)
framework neuromorphic (8)
number spikes (8)
exploration (5)
neuromorphic (5)
exploration hardware (4)
