Liquid State Machines (LSMs) are reservoir computing systems composed of recurrently connected spiking neural networks. They have attracted research interest for their capacity to model biological structures and as promising pattern recognition tools suited to implementation in neuromorphic processors, benefiting from the modest use of computing resources in their training process. However, it has proved difficult to optimize LSMs for complex tasks such as event-based computer vision, and few implementations on large-scale neuromorphic processors have been attempted. In this work, we show that offline-trained LSMs implemented on the SpiNNaker neuromorphic processor can classify visual events, achieving state-of-the-art performance on the event-based N-MNIST dataset.
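The LSM pipeline described above, a fixed recurrent spiking reservoir whose only trained component is an offline linear readout, can be sketched in a few lines. This is a minimal illustration, not the paper's SpiNNaker implementation: the discrete-time leaky integrate-and-fire reservoir, the synthetic Poisson-like inputs, and all parameter values are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_CLASSES, T = 8, 50, 2, 40

# Sparse random input and recurrent weights: the "liquid" is fixed and
# never trained; only the linear readout is fitted offline.
W_in = rng.normal(0, 1.0, (N_RES, N_IN)) * (rng.random((N_RES, N_IN)) < 0.3)
W_res = rng.normal(0, 0.5, (N_RES, N_RES)) * (rng.random((N_RES, N_RES)) < 0.1)

def liquid_state(spike_train, leak=0.9, threshold=1.0):
    """Run the reservoir on a (T, N_IN) input spike train and return the
    time-averaged reservoir spike counts as the liquid state."""
    v = np.zeros(N_RES)
    spikes = np.zeros(N_RES)
    counts = np.zeros(N_RES)
    for t in range(T):
        v = leak * v + W_in @ spike_train[t] + W_res @ spikes
        spikes = (v >= threshold).astype(float)
        v[spikes == 1.0] = 0.0  # reset neurons that fired
        counts += spikes
    return counts / T

def make_sample(label):
    # Two synthetic classes: spike trains at clearly different firing rates.
    rate = 0.1 if label == 0 else 0.4
    return (rng.random((T, N_IN)) < rate).astype(float)

labels = [0] * 50 + [1] * 50
X = np.array([liquid_state(make_sample(c)) for c in labels])
Y = np.eye(N_CLASSES)[labels]

# Offline readout training: ridge-regularized least squares on liquid states.
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N_RES), X.T @ Y)
accuracy = ((X @ W_out).argmax(axis=1) == np.array(labels)).mean()
```

The key design point this mirrors is that only `W_out` is learned; the reservoir projects spatio-temporal input patterns into a higher-dimensional state from which a simple linear model can separate classes.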
Some complex dissipative dynamics associated with the noise-like pulse (NLP) regime of a passively mode-locked erbium-doped fiber laser are studied numerically. By means of a convenient 3D mapping of the spatio-temporal pulse evolution, for properly chosen dispersion parameters, several puzzling dissipative dynamics of NLPs are identified: the expelling of sub-packets that move away from the main bunch, the sudden extinction of isolated sub-pulses, collisions between internal fragments travelling at different speeds, the rise of new sub-pulses, and the formation of complex trajectories by substructures that first move away from and then return to the main bunch. In addition, the emergence of optical rogue waves (ORWs) within NLPs is demonstrated numerically. To help understand these behaviors, which are evidenced in the time domain, spectral analyses were also performed; they show, among other things, that the spectrum of an NLP is markedly distorted when it hosts an ORW.
Neural networks have enabled great advances in recent times, mainly due to improved parallel computing capabilities in accordance with Moore's Law, which have reduced the time needed to learn the parameters of complex, multi-layered neural architectures. However, with silicon technology reaching its physical limits, new computing paradigms are needed to increase the power efficiency of learning algorithms, especially for handling deep spatio-temporal knowledge in embedded applications. With the goal of mimicking the brain's power efficiency, new hardware architectures such as the SpiNNaker board have been built.
This paper presents a grammatical evolution (GE)-based methodology to automatically design third-generation artificial neural networks (ANNs), also known as spiking neural networks (SNNs), for solving supervised classification problems. The proposal designs SNNs by exploring the search space of three-layered feedforward topologies with pre-configured synaptic connections (weights and delays), so that no explicit training is carried out. Moreover, the designed SNNs have partial connections between the input and hidden layers, which may help avoid redundancies and reduce the dimensionality of the input feature vectors.
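The search-based design idea above, in which candidate networks encode synaptic weights and delays directly and are evaluated without any explicit training, can be sketched as follows. A plain mutation-and-selection loop on a toy rate-coded task stands in here for the full grammatical-evolution machinery, which is not reproduced; every parameter, helper name, and modeling choice is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HID, T, POP, GENS = 4, 6, 30, 12, 10

def make_data(n=40):
    # Toy binary task on rate-coded inputs in [0, 1].
    X = rng.random((n, N_IN))
    y = (X[:, 0] + X[:, 1] > X[:, 2] + X[:, 3]).astype(int)
    return X, y

def random_genome():
    # A candidate network is fully specified by its weights and integer
    # synaptic delays, plus a fixed linear readout; nothing is trained.
    return {"w": rng.normal(0, 1, (N_HID, N_IN)),
            "d": rng.integers(0, 5, (N_HID, N_IN)),
            "r": rng.normal(0, 1, N_HID)}

def mutate(g):
    child = {k: v.copy() for k, v in g.items()}
    child["w"] += rng.normal(0, 0.2, child["w"].shape)
    flip = rng.random(child["d"].shape) < 0.1
    child["d"][flip] = rng.integers(0, 5, flip.sum())
    return child

def simulate(g, x):
    # One hidden LIF layer with per-synapse delays; returns spike counts.
    spikes_in = (rng.random((T, N_IN)) < x).astype(float)
    v = np.zeros(N_HID)
    counts = np.zeros(N_HID)
    for t in range(T):
        td = t - g["d"]  # (N_HID, N_IN) delayed presynaptic times
        pre = np.where(td >= 0,
                       spikes_in[np.clip(td, 0, None), np.arange(N_IN)], 0.0)
        v = 0.8 * v + (g["w"] * pre).sum(axis=1)
        fired = v >= 1.0
        counts += fired
        v[fired] = 0.0
    return counts

def fitness(g, X, y):
    preds = [int(simulate(g, xi) @ g["r"] > 0) for xi in X]
    return float(np.mean(np.array(preds) == y))

X, y = make_data()
pop = [random_genome() for _ in range(POP)]
best, best_fit = None, -1.0
for _ in range(GENS):
    scores = sorted(((fitness(g, X, y), i, g) for i, g in enumerate(pop)),
                    reverse=True)
    if scores[0][0] > best_fit:
        best_fit, best = scores[0][0], scores[0][2]
    elite = [g for _, _, g in scores[:POP // 2]]
    pop = elite + [mutate(g) for g in elite]
```

The point of contact with the abstract is structural: the search operates directly on configured synaptic connections (weights and delays) and selects by classification performance, so no gradient-based or spike-timing training rule is ever applied to a candidate.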