Recent developments in neuromorphic hardware engineering make mixed-signal VLSI neural network models promising candidates for neuroscientific research tools and massively parallel computing devices, especially for tasks that exhaust the computing power of software simulations. Still, like all analog hardware systems, neuromorphic models suffer from restricted configurability and production-related fluctuations of device characteristics. Since future systems, built from ever-smaller structures, will inevitably exhibit such inhomogeneities at the unit level as well, self-regulation properties become a crucial requirement for their successful operation. By applying a cortically inspired self-adjusting network architecture, we show that the activity of generic spiking neural networks emulated on a neuromorphic hardware system can be kept within a biologically realistic firing regime and can gain remarkable robustness against transistor-level variations. As a first approach of this kind in engineering practice, the short-term synaptic depression and facilitation mechanisms implemented within an analog VLSI model of integrate-and-fire (I&F) neurons are functionally utilized for network-level stabilization. We present experimental data acquired both from the hardware model and from comparative software simulations which demonstrate the applicability of the employed paradigm to neuromorphic VLSI devices.
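The stabilization mechanism referred to above rests on short-term synaptic depression and facilitation. As a point of reference only (not the authors' hardware circuit), the sketch below implements the standard Tsodyks-Markram phenomenological model of short-term plasticity, which is the general class of mechanism the abstract describes; all parameter values and the function name are illustrative assumptions.

import numpy as np

def tsodyks_markram(spike_times, U=0.2, tau_rec=0.5, tau_facil=0.05):
    """Effective synaptic efficacy (u*x) at each presynaptic spike.

    U         -- baseline utilization (release probability), assumed value
    tau_rec   -- recovery time constant of depressed resources in s, assumed
    tau_facil -- decay time constant of facilitation in s, assumed
    """
    x, u = 1.0, U            # x: available resources, u: utilization
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            # between spikes: resources recover toward 1, facilitation decays toward U
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)
            u = U + (u - U) * np.exp(-dt / tau_facil)
        u = u + U * (1.0 - u)        # facilitation step at the spike
        efficacies.append(u * x)     # synaptic weight scaling for this spike
        x = x * (1.0 - u)            # depression: released resources become unavailable
        last_t = t
    return efficacies

# A regular 20 Hz spike train: efficacies decrease as depression dominates,
# illustrating how strongly driven synapses self-limit the network activity.
print(tsodyks_markram(list(np.arange(0.0, 0.5, 0.05))))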


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2965017
DOI: http://dx.doi.org/10.3389/fncom.2010.00129

Publication Analysis

Top Keywords

neuromorphic vlsi (8), vlsi devices (8), short-term synaptic (8), neuromorphic hardware (8), software simulations (8), neuromorphic (5), compensating inhomogeneities (4), inhomogeneities neuromorphic (4), vlsi (4), devices short-term (4)

Similar Publications

Applying the Wake-Up-like Effect to Enhance the Capabilities of Two-Dimensional Ferroelectric Field-Effect Transistors.

ACS Appl Mater Interfaces

May 2024

National Key Laboratory of Advanced Micro and Nano Manufacture Technology, School of Integrated Circuits, Peking University, Beijing 100871, China.

For traditional ferroelectric field-effect transistors (FeFETs), enhancing the polarization domain of bulk ferroelectric materials is essential to improve device performance. However, there has been limited investigation into the enhancement of the polarization field in two-dimensional (2D) ferroelectric materials such as CuInP2S6 (CIPS). In this study, similar to bulk ferroelectric materials, CIPS exhibited an enhanced polarization field upon application of an external cyclic voltage.


In recent years, memristors have successfully demonstrated their significant potential in artificial neural networks (ANNs) and neuromorphic computing. Nonetheless, ANNs constructed from crossbar arrays suffer from cross-talk issues and low integration densities. Here, we propose an eight-layer three-dimensional (3D) vertical crossbar memristor with an ultrahigh rectification ratio (RR > 10) and an ultrahigh nonlinearity (>10) to overcome these limitations, which enables it to reach a >1 Tb array size without reading failure.


Efficient SNN multi-cores MAC array acceleration on SpiNNaker 2.

Front Neurosci

August 2023

Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany.

The potentially low energy consumption of spiking neural networks (SNNs) has attracted the attention of the AI community. However, SNN processing on CPUs alone inevitably leads to long runtimes for large models and massive datasets. This study introduces the MAC array, a parallel architecture on each processing element (PE) of SpiNNaker 2, into the computational process of SNN inference.


We live in a technologically advanced society where semiconductor chips are used in the majority of our gadgets, and the basic criteria for data storage and memory are a small footprint and low power consumption. SRAM is a very important part of this and can be used to meet these criteria. In this study, LTspice software is used to design a high-performance sense amplifier circuit for low-power SRAM applications.


E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware.

Front Neurosci

November 2022

Chair of Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany.

Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, with many applications such as the adaptation of models to specific situations like changes in environmental settings or optimization for individuals.

