In recent years, an increasing number of studies have investigated the effects of closed-loop anti-epileptic treatments. Most current research is still very labour-intensive: real-time treatment is triggered manually, and conclusions can only be drawn after multiple days of manual review and annotation of the electroencephalogram (EEG). In this paper we propose a technique based on reservoir computing (RC) to detect epileptic seizures automatically and in real time in the intra-cranial EEG (iEEG) of epileptic rats, in order to immediately trigger seizure treatment.
Introduction: In this paper we propose a technique based on reservoir computing (RC) to mark epileptic seizures on the intra-cranial electroencephalogram (EEG) of rats. RC is a technique for training recurrent neural networks that has been shown to possess good generalization properties even with limited training data.
Materials: The system is evaluated on data containing two different seizure types: absence seizures from genetic absence epilepsy rats from Strasbourg (GAERS) and tonic-clonic seizures from kainate-induced temporal-lobe epilepsy rats.
The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of SNN that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies.
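As a hedged illustration of the event-driven strategy (in contrast to fixed-timestep simulation), the sketch below advances each leaky integrate-and-fire neuron only when a spike event actually reaches it, processing events from a priority queue ordered by time; between events the membrane potential is decayed analytically. All neuron parameters and the network wiring are invented for the example and are far simpler than the neuron models discussed in the paper:

```python
import heapq
import math

# Illustrative constants: membrane time constant, firing threshold, synaptic delay.
TAU, V_TH, DELAY = 20.0, 1.0, 1.0

class Neuron:
    def __init__(self):
        self.v = 0.0        # membrane potential
        self.last_t = 0.0   # time of the last event that touched this neuron

    def receive(self, t, w):
        # Decay the potential analytically over the gap since the last event,
        # so no per-timestep update loop is needed, then add the synaptic weight.
        self.v *= math.exp(-(t - self.last_t) / TAU)
        self.last_t = t
        self.v += w
        if self.v >= V_TH:
            self.v = 0.0    # reset after a spike
            return True
        return False

def simulate(neurons, synapses, input_spikes, t_max=100.0):
    """Process spike events in time order via a priority queue.

    input_spikes: iterable of (time, target_index, weight) tuples.
    synapses: dict mapping a neuron index to a list of (target, weight) pairs.
    Returns the list of (time, neuron_index) output spikes.
    """
    events = list(input_spikes)
    heapq.heapify(events)
    spikes = []
    while events:
        t, i, w = heapq.heappop(events)
        if t > t_max:
            break
        if neurons[i].receive(t, w):
            spikes.append((t, i))
            for j, wij in synapses.get(i, []):
                heapq.heappush(events, (t + DELAY, j, wij))
    return spikes
```

For instance, a three-neuron chain driven by one input spike produces one output spike per neuron, each delayed by the synaptic delay; the simulator performs exactly one update per event rather than one per neuron per timestep, which is the source of the speed-up for sparse activity.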
Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, operating at the edge of stability, in which only a linear static readout layer is trained, by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks based solely on a few low-range, high-noise sensors.
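The RC recipe described here (a fixed random recurrent network scaled toward the edge of stability, with only a linear readout trained by regression) can be sketched in a few lines. The reservoir size, spectral radius, and the toy delayed-copy task below are all illustrative choices, not the settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (illustrative sizes); only W_out below is trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: edge of stability

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input signal delayed by 3 steps, which requires
# the reservoir's fading memory of recent inputs.
u = rng.uniform(-1.0, 1.0, (500, 1))
y = np.roll(u, 3, axis=0)
X = run_reservoir(u)

# Linear static readout trained by ridge regression (a standard linear method).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

The key property exploited by RC is visible here: the recurrent weights `W` are never updated, so training reduces to a single linear solve over the collected reservoir states.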
Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results comparing all three implementations, and draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics.