Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks.

Sensors (Basel)

Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD 21250, USA.

Published: February 2024

Organizations that run high-performance computing systems face a host of challenges, including overall energy consumption, microprocessor clock-frequency limits, and the rising cost of chip production. Processor speeds have plateaued over the last decade, remaining in the range of 2 GHz to 5 GHz. Researchers argue that brain-inspired computing holds substantial promise for mitigating these challenges; the spiking neural network (SNN) in particular offers notable power efficiency compared with conventional design paradigms. Nevertheless, our analysis highlights several key obstacles to implementing large-scale neural networks (NNs) on silicon: the absence of automated tools, the need for multidisciplinary domain expertise, and the inability of existing algorithms to efficiently partition and place large SNN computations onto hardware. In this paper, we develop an automated tool flow that converts any artificial neural network (ANN) into an SNN, together with a novel graph-partitioning algorithm that places the resulting SNN on a network-on-chip (NoC), paving the way for future energy-efficient, high-performance computing. The methodology converts ANN architectures into SNNs with a marginal average error penalty of only 2.65%. The proposed graph-partitioning algorithm yields, on average, a 14.22% decrease in inter-synaptic communication and an 87.58% reduction in intra-synaptic communication, demonstrating its effectiveness in optimizing NN communication pathways. Compared with a baseline graph-partitioning algorithm, the proposed approach achieves an average 79.74% decrease in latency and a 14.67% reduction in energy consumption. Using existing NoC tools, the energy-latency product of the SNN architectures is, on average, 82.71% lower than that of the baseline architectures.
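
The paper's tool flow is not reproduced here, but the placement problem it targets can be made concrete. Below is a minimal, hypothetical sketch (all names are ours, not the paper's) of a greedy partitioner that assigns SNN neurons to NoC tiles of fixed capacity, preferring the tile that already holds a neuron's heaviest synaptic partners so that spike traffic stays local; the paper's actual algorithm may differ substantially.

```python
# Hypothetical greedy placement sketch; not the paper's algorithm.
# Each synapse is (src, dst, traffic), where traffic models spike volume.
from collections import defaultdict

def greedy_partition(edges, num_neurons, num_tiles, capacity):
    """Map each neuron to a NoC tile (assumes capacity * num_tiles >= num_neurons)."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    tile_of, load = {}, [0] * num_tiles
    # Place the most heavily connected neurons first.
    order = sorted(range(num_neurons), key=lambda n: -sum(w for _, w in adj[n]))
    for n in order:
        gain = [0.0] * num_tiles          # traffic kept local per candidate tile
        for nbr, w in adj[n]:
            if nbr in tile_of:
                gain[tile_of[nbr]] += w
        best = max((t for t in range(num_tiles) if load[t] < capacity),
                   key=lambda t: gain[t])
        tile_of[n], load[best] = best, load[best] + 1
    return tile_of

def inter_tile_traffic(edges, tile_of):
    """Synaptic traffic crossing tile boundaries -- the quantity to minimize."""
    return sum(w for u, v, w in edges if tile_of[u] != tile_of[v])
```

A metric like inter_tile_traffic is the natural proxy for the inter-synaptic communication reductions the abstract reports.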


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10892219
DOI: http://dx.doi.org/10.3390/s24041329

Publication Analysis

Top Keywords

graph-partitioning algorithm (12)
neural network (8)
spiking neural (8)
neural networks (8)
high-performance computing (8)
energy consumption (8)
benchmarking artificial (4)
neural (4)
artificial neural (4)
architectures (4)

Similar Publications

The objective of the max-cut problem is to partition a graph's vertices into two subsets so that the total weight of the edges crossing between the subsets is maximized. Although it is an elementary graph-partitioning problem, it is among the most challenging combinatorial optimization problems, and its many application areas make it highly relevant. Owing to this relevance, the problem is solved here using the Harris hawks optimization (HHO) algorithm, as sketched below.
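
For context, a hedged sketch of the max-cut objective follows; the cited work optimizes it with HHO, for which a plain bit-flip local search stands in here.

```python
# Max-cut sketch: vertices get 0/1 labels; the cut is the total weight of
# edges whose endpoints carry different labels. HHO would replace the
# simple local search below.
import random

def cut_weight(edges, labels):
    return sum(w for u, v, w in edges if labels[u] != labels[v])

def local_search_maxcut(edges, n, iters=10_000, seed=0):
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n)]
    best = cut_weight(edges, labels)
    for _ in range(iters):
        v = rng.randrange(n)
        labels[v] ^= 1                    # tentatively move v across the cut
        w = cut_weight(edges, labels)
        if w > best:
            best = w                      # keep an improving move
        else:
            labels[v] ^= 1                # otherwise revert
    return best, labels
```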


An end-to-end bi-objective approach to deep graph partitioning.

Neural Netw

January 2025

Information Systems Technology and Design Pillar, Singapore University of Technology and Design, 485998, Singapore.

Graphs are ubiquitous in real-world applications, such as computation graphs and social networks. Partitioning large graphs into smaller, balanced partitions is often essential, with the bi-objective graph partitioning problem aiming to minimize both the "cut" across partitions and the imbalance in partition sizes. However, existing heuristic methods face scalability challenges or overlook partition balance, leading to suboptimal results.
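
Under the usual definitions (the paper's exact formulation may differ), the two objectives can be computed as follows:

```python
# Bi-objective evaluation sketch: edge cut across partitions and size
# imbalance relative to a perfect k-way split (0.0 = perfectly balanced).
def bi_objective(edges, part, k):
    cut = sum(1 for u, v in edges if part[u] != part[v])
    sizes = [0] * k
    for p in part.values():
        sizes[p] += 1
    ideal = len(part) / k
    return cut, max(sizes) / ideal - 1.0
```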


Recently, there has been growing interest in deep spectral methods for image localization and segmentation, influenced by traditional spectral segmentation approaches. These methods reframe the image decomposition process as a graph partitioning task by extracting features using self-supervised learning and utilizing the Laplacian of the affinity matrix to obtain eigensegments. However, instance segmentation has received less attention than other tasks within the context of deep spectral methods.
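
The spectral core of that pipeline is straightforward to sketch. Assuming an affinity matrix W has already been built from self-supervised features (the feature extraction itself is omitted), a bipartition into two "eigensegments" via the normalized graph Laplacian looks roughly like this:

```python
# Spectral bipartition sketch via the normalized Laplacian L = I - D^-1/2 W D^-1/2.
import numpy as np

def spectral_bipartition(W):
    """W: (n, n) symmetric nonnegative affinity matrix over pixels/patches."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))   # guard isolated nodes
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
    return fiedler > np.median(fiedler)  # boolean segment mask
```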


The recent trend of using network and graph structures to represent a variety of data types has renewed interest in the graph partitioning (GP) problem. This interest stems from the need for general methods that can both efficiently identify network communities and reduce the dimensionality of large graphs while satisfying various application-specific criteria. Traditional clustering algorithms often struggle to capture the complex relationships within graphs and to generalize to arbitrary clustering criteria.


Superpixel aggregation is a powerful tool for automated neuron segmentation from electron microscopy (EM) volumes. However, existing graph-partitioning methods for superpixel aggregation still involve two separate stages, model estimation and model solving, so model error is inherent. To address this issue, we integrate the two stages and propose an end-to-end aggregation framework, DeepMulticut, based on deep learning of the minimum cost multicut problem.
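
For orientation, the multicut objective itself is simple to state. A sketch under standard definitions (signed edge costs; any node-to-component labeling induces a valid multicut):

```python
# Minimum cost multicut sketch: edges carry signed costs (negative favors
# cutting); a labeling comp[node] -> component induces a multicut whose
# cost is summed over edges joining different components.
def multicut_cost(edges, comp):
    return sum(c for u, v, c in edges if comp[u] != comp[v])
```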

