Spiking neural network algorithms require fine-tuned neuromorphic hardware to increase their effectiveness. Such hardware, mainly digital, is typically built on mature silicon nodes. Future artificial intelligence applications will demand the execution of tasks with increasing complexity and over timescales spanning several decades.
Phase-encoded oscillating neural networks offer compelling advantages over metal-oxide-semiconductor-based technology for tackling complex optimization problems, with promising potential for ultralow power consumption and exceptionally rapid computational performance. In this work, we investigate the ability of these networks to solve optimization problems belonging to the nondeterministic polynomial time complexity class using nanoscale vanadium-dioxide-based oscillators integrated onto a silicon platform. Specifically, we demonstrate how the dynamic behavior of coupled vanadium dioxide devices can effectively solve combinatorial optimization problems, including graph coloring, Max-cut, and Max-3SAT problems.
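As a rough illustration of the phase-based approach (a software toy with arbitrary parameters, not the fabricated vanadium dioxide hardware), the sketch below relaxes a Kuramoto-type phase model with a slowly ramped second-harmonic injection term so that every phase settles near 0 or pi; the sign of cos(phi) then assigns each graph node to one side of a candidate Max-cut partition.

    import numpy as np

    # Toy phase-domain Max-cut solver: coupled oscillators are emulated by a
    # Kuramoto model; graph edges couple "anti-phase", so a low-energy phase
    # pattern corresponds to a large cut.
    rng = np.random.default_rng(0)
    n = 12
    A = (rng.random((n, n)) < 0.4).astype(float)       # random graph instance
    A = np.triu(A, 1)
    A = A + A.T

    phi = rng.uniform(0.0, 2.0 * np.pi, n)             # oscillator phases
    dt, steps = 0.02, 4000

    for k in range(steps):
        diff = phi[None, :] - phi[:, None]              # phi_j - phi_i
        shil = 2.0 * k / steps                          # slowly ramp the binarizing term
        dphi = -(A * np.sin(diff)).sum(axis=1) - shil * np.sin(2.0 * phi)
        phi = (phi + dt * dphi) % (2.0 * np.pi)

    side = np.where(np.cos(phi) > 0.0, 1, -1)           # partition read out from phases
    cut = ((1 - np.outer(side, side)) * A).sum() / 4.0  # edges crossing the cut
    print("cut size:", int(cut))

The ramped sin(2*phi) term plays the role of sub-harmonic injection locking: it gradually restricts each phase to one of two values, turning the continuous phase dynamics into a binary partition.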
Oscillatory neural networks (ONNs) exhibit a high potential for energy-efficient computing. In ONNs, neurons are implemented with oscillators and synapses with resistive and/or capacitive coupling between pairs of oscillators. Computing is carried out on the basis of the rich, complex, non-linear synchronization dynamics of a system of coupled oscillators.
Alternative paradigms to the von Neumann computing scheme are currently arousing huge interest. Oscillatory neural networks (ONNs) using emerging phase-change materials like VO2 constitute an energy-efficient, massively parallel, brain-inspired, in-memory computing approach. The encoding of information in the phase pattern of frequency-locked, weakly coupled oscillators makes it possible to exploit their rich non-linear dynamics and their synchronization phenomena for computing.
Foveation can be defined as the organic action of directing the gaze towards a visual region of interest to acquire relevant information selectively. With the recent advent of event cameras, we believe that taking advantage of this visual neuroscience mechanism would greatly improve the efficiency of event data processing. Indeed, applying foveation to event data would make it possible to comprehend the visual scene while significantly reducing the amount of raw data to handle.
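As a simple illustration of that data-reduction argument (not the authors' pipeline; the event arrays, sensor size, and fixation point below are all made up), foveation can be approximated by keeping only the events that fall inside a circular region around a fixation point:

    import numpy as np

    def foveate_events(x, y, t, p, cx, cy, radius):
        """Keep only the events inside a circular fovea centred on (cx, cy).

        x, y, t, p are 1-D arrays holding event coordinates, timestamps and
        polarities (the usual address-event representation); everything
        outside the fovea is dropped, which is where the reduction in raw
        data comes from.
        """
        inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
        return x[inside], y[inside], t[inside], p[inside]

    # Illustrative numbers only: one million synthetic events on a 640x480 sensor.
    rng = np.random.default_rng(1)
    n = 1_000_000
    x = rng.integers(0, 640, n)
    y = rng.integers(0, 480, n)
    t = np.sort(rng.random(n))
    p = rng.integers(0, 2, n)

    xf, yf, tf, pf = foveate_events(x, y, t, p, cx=320, cy=240, radius=60)
    print(f"kept {xf.size / n:.1%} of the events")

A practical system would of course move the fixation point over time (saccades), but even this static filter shows how sharply an event stream can be pruned before any downstream processing.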
IEEE Trans Biomed Circuits Syst, February 2024
Biologically plausible learning with neuronal dendrites is a promising way to improve spike-driven learning capability by introducing dendritic processing as an additional hyperparameter. Neuromorphic computing is an effective and essential approach towards spike-based machine intelligence and neural learning systems. However, on-line learning capability for neuromorphic models is still an open challenge.
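As a minimal sketch of what dendritic processing can add (a generic two-compartment construction with illustrative constants, not the specific model of the paper), a passive dendritic compartment can low-pass filter its own input stream and couple into a leaky integrate-and-fire soma:

    import numpy as np

    def dendritic_lif(i_soma, i_dend, dt=1e-3, tau_s=20e-3, tau_d=40e-3,
                      g_c=0.3, v_th=1.0, v_reset=0.0):
        """Leaky integrate-and-fire soma with one passive dendritic compartment.

        The dendrite integrates its own input with time constant tau_d and
        leaks into the soma through coupling g_c; the soma fires whenever its
        voltage crosses v_th.  Returns the spike times in time steps.
        """
        v_s, v_d, spikes = 0.0, 0.0, []
        for k in range(len(i_soma)):
            v_d += dt / tau_d * (-v_d + i_dend[k])
            v_s += dt / tau_s * (-v_s + i_soma[k] + g_c * (v_d - v_s))
            if v_s >= v_th:
                spikes.append(k)
                v_s = v_reset
        return spikes

    # Illustrative drive: weak somatic input plus a dendritic burst;
    # the dendritic burst is what pushes the soma above threshold.
    steps = 500
    i_soma = np.full(steps, 0.6)
    i_dend = np.zeros(steps)
    i_dend[200:350] = 3.0
    print("spike steps:", dendritic_lif(i_soma, i_dend))

Here the extra compartment, with its own time constant and coupling strength, is exactly the kind of additional knob that dendritic processing contributes to the learning dynamics.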
The compromise of the hippocampal loop is a hallmark of mesial temporal lobe epilepsy (MTLE), the most frequent epileptic syndrome in the adult population and the most often refractory to medical therapy. Hippocampal sclerosis is found in >50% of drug-refractory MTLE patients and primarily involves the CA1, consequently disrupting the hippocampal output to the entorhinal cortex (EC).
This paper describes a fully experimental hybrid system in which a [Formula: see text] memristive crossbar spiking neural network (SNN) was assembled using custom high-resistance state memristors with analogue CMOS neurons fabricated in 180 nm CMOS technology. The custom memristors used NMOS selector transistors, made available on a second 180 nm CMOS chip. One drawback is that memristors operate with currents in the micro-amperes range, while analogue CMOS neurons may need to operate with currents in the pico-amperes range.
Spiking neural networks (SNNs) are regarded as a promising candidate to deal with the major challenges of current machine learning techniques, including the high energy consumption induced by deep neural networks. However, there is still a great gap between SNNs and the few-shot learning performance of artificial neural networks. Importantly, existing spike-based few-shot learning models do not target robust learning based on spatiotemporal dynamics and superior machine learning theory.
Working memory is a fundamental feature of biological brains for perception, cognition, and learning. In addition, learning with working memory, which has been shown in conventional artificial intelligence systems through recurrent neural networks, is instrumental to advanced cognitive intelligence. However, it is hard to endow a simple neuron model with working memory, and to understand the biological mechanisms that have resulted in such a powerful ability at the neuronal level.
Liquid State Machines (LSMs) are computing reservoirs composed of recurrently connected Spiking Neural Networks. They have attracted research interest for their capacity to model biological structures and as promising pattern recognition tools suitable for implementation in neuromorphic processors, benefiting from the modest use of computing resources in their training process. However, it has been difficult to optimize LSMs for solving complex tasks such as event-based computer vision, and few implementations in large-scale neuromorphic processors have been attempted. In this work, we show that offline-trained LSMs implemented in the SpiNNaker neuromorphic processor are able to classify visual events, achieving state-of-the-art performance on the event-based N-MNIST dataset.
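A compact software caricature of that offline-training recipe (independent of SpiNNaker; the reservoir sizes, constants, and synthetic data below are placeholders) is: run a fixed, randomly connected spiking reservoir, collect spike counts as the liquid state, and fit only a linear readout by regularized least squares.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_cls, t_steps = 16, 200, 3, 100

    # Fixed random input and recurrent weights: only the readout is trained.
    w_in = rng.normal(0.0, 0.8, (n_res, n_in)) * (rng.random((n_res, n_in)) < 0.2)
    w_rec = rng.normal(0.0, 0.25, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)

    def reservoir_state(spikes_in, leak=0.9, v_th=1.0):
        """Run the LIF reservoir on an input spike raster (t_steps x n_in)
        and return per-neuron spike counts as the liquid state."""
        v = np.zeros(n_res)
        fired = np.zeros(n_res)
        counts = np.zeros(n_res)
        for t in range(t_steps):
            v = leak * v + w_in @ spikes_in[t] + w_rec @ fired
            fired = (v >= v_th).astype(float)
            v = np.where(fired > 0, 0.0, v)          # reset after a spike
            counts += fired
        return counts

    # Synthetic task: each class preferentially activates a different input subset.
    def make_sample(label):
        rates = np.full(n_in, 0.05)
        rates[label::n_cls] = 0.5
        return (rng.random((t_steps, n_in)) < rates).astype(float)

    labels = rng.integers(0, n_cls, 120)
    states = np.stack([reservoir_state(make_sample(l)) for l in labels])
    targets = np.eye(n_cls)[labels]

    # Offline readout training: regularized least squares on the collected states.
    w_out = np.linalg.solve(states.T @ states + 1e-3 * np.eye(n_res),
                            states.T @ targets)
    pred = (states @ w_out).argmax(axis=1)
    print("training accuracy:", (pred == labels).mean())

The point is that the spiking machinery stays untouched during training; only the final linear map is optimized, which is what keeps the training cost modest.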
Computing paradigms based on von Neumann architectures cannot keep up with the ever-increasing data growth (also called the "data deluge gap"). This has resulted in investigating novel computing paradigms and design approaches at all levels, from materials to system-level implementations and applications. One alternative computing approach based on artificial neural networks uses oscillators to compute: Oscillatory Neural Networks (ONNs).
Brain-inspired computing employs devices and architectures that emulate biological functions for more adaptive and energy-efficient systems. Oscillatory neural networks (ONNs) are an alternative approach to emulating biological functions of the human brain and are suitable for solving large and complex associative problems. In this work, we investigate the dynamics of coupled oscillators to implement such ONNs.
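To make the associative-memory use case concrete (a hardware-agnostic toy with made-up sizes, reusing the same kind of phase dynamics as the Max-cut sketch earlier), two bipolar patterns can be stored in a Hebbian coupling matrix, and a corrupted probe then relaxes toward the stored pattern it most resembles:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 32

    # Store two bipolar patterns in a Hebbian coupling matrix.
    patterns = rng.choice([-1.0, 1.0], size=(2, n))
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)

    # Probe: pattern 0 with a few bits flipped, encoded as phases 0 / pi.
    probe = patterns[0].copy()
    probe[:5] *= -1
    phi = np.where(probe > 0, 0.0, np.pi) + rng.normal(0.0, 0.3, n)

    # Positive couplings pull oscillators in phase, negative ones anti-phase;
    # the sin(2*phi) term keeps every phase close to 0 or pi.
    dt, k_shil = 0.05, 1.0
    for _ in range(2000):
        diff = phi[None, :] - phi[:, None]
        phi += dt * ((W * np.sin(diff)).sum(axis=1) - k_shil * np.sin(2.0 * phi))

    recalled = np.where(np.cos(phi) > 0, 1.0, -1.0)
    print("overlap with stored pattern:", abs(recalled @ patterns[0]) / n)

An overlap close to 1 means the corrupted input has been pulled back to the stored pattern, which is the associative behaviour the abstract refers to.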
Oscillatory Neural Networks (ONNs) are currently arousing interest in the research community for their potential to implement very fast, ultra-low-power computing tasks by exploiting specific emerging technologies. From the architectural point of view, ONNs are based on the synchronization of oscillatory neurons in cognitive processing, as occurs in the human brain. As emerging technologies, VO2 and memristive devices show promising potential for the efficient implementation of ONNs.
IEEE Trans Neural Netw Learn Syst, December 2022
Neuromorphic computing is a promising technology that realizes computation based on event-based spiking neural networks (SNNs). However, fault-tolerant on-chip learning remains a challenge in neuromorphic systems. This study presents the first scalable neuromorphic fault-tolerant context-dependent learning (FCL) hardware framework.
Nano-oscillators based on phase-transition materials are being explored for the implementation of different non-conventional computing paradigms. In particular, vanadium dioxide (VO2) devices are used to design autonomous non-linear oscillators from which oscillatory neural networks (ONNs) can be developed. In this work, we propose a new architecture for ONNs in which sub-harmonic injection locking (SHIL) is exploited to ensure that the phase information encoded in each neuron can only take two values.
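A standard way to write down this two-valued phase encoding (a generic Kuramoto-with-SHIL form, not necessarily the exact device model of the paper) is to add a second-harmonic injection term to each oscillator's phase equation:

    \frac{d\phi_i}{dt} = \omega_i + \sum_{j \neq i} K_{ij} \sin(\phi_j - \phi_i) - K_s \sin(2\phi_i)

The injected term, obtained by perturbing each oscillator at roughly twice its free-running frequency, leaves phi_i = 0 and phi_i = pi as the only stable phases, so each neuron effectively holds one binary value while the coupling terms carry out the computation.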
A critical challenge in neuromorphic computing is to devise computationally efficient learning algorithms. When implementing gradient-based learning, error information must be routed through the network, such that each neuron knows its contribution to the output and thus how to adjust its weight. This is known as the credit assignment problem.
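In the conventional, non-spiking setting, that routed error information is just the layer-wise chain rule of backpropagation; writing it out (standard notation, not specific to this paper) makes clear what a learning substrate has to provide locally:

    \delta^{(L)} = \nabla_{a^{(L)}} \mathcal{L} \odot f'(z^{(L)}), \qquad
    \delta^{(l)} = (W^{(l+1)})^{\top} \delta^{(l+1)} \odot f'(z^{(l)}), \qquad
    \frac{\partial \mathcal{L}}{\partial W^{(l)}} = \delta^{(l)} (a^{(l-1)})^{\top}

Each weight update needs the transported error delta^(l), and therefore knowledge of the downstream weights W^(l+1), which is exactly the non-local information that is expensive to route through neuromorphic hardware.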
IEEE Trans Neural Netw Learn Syst, May 2022
Neurophysiological observations confirm that the brain is not only able to detect impaired synapses (as in brain damage) but is also, to some extent, capable of repairing faulty synapses. It has been shown that retrograde signaling by astrocytes leads to the modulation of synaptic transmission, and thus the bidirectional collaboration of an astrocyte with nearby neurons is an important aspect of the self-repairing mechanism. Specifically, retrograde signaling via the astrocyte can increase the transmission probability of the healthy synapses linked to the neuron.
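Purely as a toy illustration of that redistribution idea (a hypothetical rule with made-up numbers, not the authors' astrocyte model), the snippet below raises the release probability of a neuron's surviving synapses in proportion to the transmission lost on the faulty ones:

    import numpy as np

    def redistribute_release(p_release, healthy, p_max=0.9):
        """Toy self-repair rule: boost healthy synapses to compensate for the
        transmission lost on faulty ones (illustrative, not a published model).

        p_release : baseline release probabilities of one neuron's synapses
        healthy   : boolean mask, False where a synapse is damaged
        """
        lost = p_release[~healthy].sum()
        repaired = np.where(healthy, p_release, 0.0)
        if healthy.any() and lost > 0:
            # Spread the lost probability mass over the healthy synapses,
            # capped at p_max, mimicking retrograde up-regulation.
            repaired[healthy] = np.minimum(
                p_release[healthy] + lost / healthy.sum(), p_max)
        return repaired

    p = np.full(10, 0.3)
    mask = np.ones(10, dtype=bool)
    mask[:4] = False                      # four damaged synapses
    print(redistribute_release(p, mask))  # healthy ones rise from 0.3 to 0.5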
IEEE Trans Biomed Circuits Syst, December 2020
The advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors has brought on new opportunities for applying both Deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge. This can facilitate the advancement of medical Internet of Things (IoT) systems and Point of Care (PoC) devices. In this paper, we provide a tutorial describing how various technologies including emerging memristive devices, Field Programmable Gate Arrays (FPGAs), and Complementary Metal Oxide Semiconductor (CMOS) can be used to develop efficient DL accelerators to solve a wide variety of diagnostic, pattern recognition, and signal processing problems in healthcare.
One of the biggest struggles when working with artificial neural networks is coming up with models that closely match biological observations. Biological neural networks seem to be capable of creating and pruning dendritic spines, leading to synapses being changed, which results in higher learning capability. The latter forms the basis of the present study, in which a new ionic model for reservoir-like networks consisting of spiking neurons is introduced.
Neurophysiological observations are clarifying how astrocytes can actively participate in information processing and how they can encode information through frequency and amplitude modulation of intracellular Ca2+ signals. Consequently, hardware realization of astrocytes is important for developing the next generation of bio-inspired computing systems. In this paper, astrocytic calcium oscillations and neuronal firing dynamics are represented by the De Pittà and integrate-and-fire (IF) models, respectively.
Neural networks have enabled great advances in recent times, due mainly to improved parallel computing capabilities in accordance with Moore's law, which allowed reducing the time needed for the parameter learning of complex, multi-layered neural architectures. However, with silicon technology reaching its physical limits, new types of computing paradigms are needed to increase the power efficiency of learning algorithms, especially for dealing with deep spatio-temporal knowledge in embedded applications. With the goal of mimicking the brain's power efficiency, new hardware architectures such as the SpiNNaker board have been built.
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses.