Publications by authors named "Ryad Benosman"

Article Synopsis
  • The proposed neuromimetic architecture enables continuous pattern recognition using an enhanced version of the event-based Hierarchy Of Time-Surfaces (HOTS) algorithm, operating on data from a neuromorphic camera.
  • Improvements include a homeostatic gain control that improves pattern learning (see the sketch after this list) and a new mathematical model relating HOTS to Spiking Neural Networks (SNNs), turning it into an online event-driven classifier.
  • Validation on datasets such as Poker-DVS, N-MNIST, and DVS Gesture shows that the architecture excels at rapid object recognition through real-time event processing.
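Below is a minimal sketch of the homeostatic gain-control idea mentioned above, assuming a hypothetical winner-take-all layer of time-surface prototypes; the layer size, learning rates, and the random stand-in for time-surface similarity are all invented for illustration and are not the paper's implementation.

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <random>

// Sketch of homeostatic gain control: each prototype's similarity score is
// scaled by a gain that drifts so that all prototypes win equally often,
// preventing a few prototypes from dominating learning.

constexpr int kPrototypes = 4;
constexpr double kTargetRate = 1.0 / kPrototypes;  // desired win frequency
constexpr double kGainLearningRate = 0.05;

int main() {
    std::array<double, kPrototypes> gain;
    gain.fill(1.0);
    std::array<double, kPrototypes> winRate{};  // running estimate of win frequency

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    for (int event = 0; event < 10000; ++event) {
        // Stand-in for the real similarity between the incoming time-surface
        // and each prototype (random here, biased toward prototype 0).
        std::array<double, kPrototypes> similarity;
        for (int i = 0; i < kPrototypes; ++i)
            similarity[i] = uni(rng) + (i == 0 ? 0.5 : 0.0);

        // Winner-take-all on the gain-modulated similarity.
        int winner = 0;
        for (int i = 1; i < kPrototypes; ++i)
            if (gain[i] * similarity[i] > gain[winner] * similarity[winner])
                winner = i;

        // Homeostatic update: prototypes that win too often have their gain
        // lowered, rarely-winning ones are boosted.
        for (int i = 0; i < kPrototypes; ++i) {
            winRate[i] += 0.001 * ((i == winner ? 1.0 : 0.0) - winRate[i]);
            gain[i] *= std::exp(kGainLearningRate * (kTargetRate - winRate[i]));
        }
    }
    for (int i = 0; i < kPrototypes; ++i)
        std::printf("prototype %d: gain %.3f winRate %.3f\n", i, gain[i], winRate[i]);
}
```

With the gain in place, the built-in advantage of prototype 0 is compensated and all win rates converge toward 1/N, which is what keeps every prototype learning.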

The amyloid precursor protein (APP) is linked to the genetics and pathogenesis of Alzheimer's disease (AD). It is the parent protein of the β-amyloid (Aβ) peptide, the main constituent of the amyloid plaques found in AD brains. The pathways from APP to Aβ are intensively studied, yet the normal functions of APP itself have generated less interest.


Autonomous flight for large aircraft appears to be within our reach. However, launching autonomous systems for everyday missions still requires an immense interdisciplinary research effort supported by targeted policies and funding. We believe that concerted endeavors in the fields of neuroscience, mathematics, sensor physics, robotics, and computer science are needed to address the remaining crucial scientific challenges.


The neural encoding of visual features in primary visual cortex (V1) is well understood, with robust correlates of low-level perception, making V1 a strong candidate for vision restoration through neuroprosthetics. However, the functional relevance of neural dynamics evoked through external stimulation imposed directly at the cortical level is poorly understood. Furthermore, protocols for designing cortical stimulation patterns that would induce a naturalistic perception of the encoded stimuli have not yet been established.


Vision restoration is an ideal medical application for optogenetics, because the eye provides direct optical access to the retina for stimulation. Optogenetic therapy could be used for diseases involving photoreceptor degeneration, such as retinitis pigmentosa or age-related macular degeneration. We describe here the selection, in non-human primates, of a specific optogenetic construct currently being tested in a clinical trial.


We present the first purely event-based method for face detection, using the high temporal resolution of an event-based camera to detect the presence of a face in a scene through eye blinks. Eye blinks are a stable natural dynamic temporal signature of human faces across the population that can be fully captured by event-based sensors. We show that this temporal signature can be easily detected by correlating the acquired local activity with a generic temporal model of eye blinks generated from a wide population of users.
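As a rough illustration of that correlation step, here is a minimal sketch assuming a hypothetical per-region activity trace sampled at 1 kHz and an invented two-burst blink template; the paper's actual template shape and detection threshold are not reproduced.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Normalized cross-correlation between two equal-length signals.
double normalizedCorrelation(const std::vector<double>& a, const std::vector<double>& b) {
    double dot = 0, na = 0, nb = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return dot / (std::sqrt(na * nb) + 1e-12);
}

int main() {
    // Hypothetical two-burst template: ON events as the eyelid closes,
    // OFF events as it reopens (shape invented for the demo).
    std::vector<double> blinkTemplate;
    for (int t = 0; t < 200; ++t)
        blinkTemplate.push_back(std::exp(-0.5 * std::pow((t - 50) / 15.0, 2)) +
                                std::exp(-0.5 * std::pow((t - 150) / 15.0, 2)));

    // Simulated 1 kHz activity trace from one eye region: noise floor plus one blink.
    std::vector<double> activity(1000, 0.05);
    for (int t = 0; t < 200; ++t) activity[400 + t] += blinkTemplate[t];

    // Slide the template over the trace and report correlation peaks.
    for (int offset = 0; offset + 200 <= (int)activity.size(); offset += 20) {
        std::vector<double> window(activity.begin() + offset, activity.begin() + offset + 200);
        double c = normalizedCorrelation(window, blinkTemplate);
        if (c > 0.9) std::printf("blink candidate at t=%d ms (corr=%.2f)\n", offset, c);
    }
}
```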


Optical flow is a crucial component of the feature space for early visual processing of dynamic scenes, especially in new applications such as self-driving vehicles, drones, and autonomous robots. Dynamic vision sensors are well suited to such applications because of their asynchronous, sparse, and temporally precise representation of the visual dynamics. However, many algorithms proposed for computing visual flow with these sensors suffer from the aperture problem: the direction of the estimated flow is governed by the local curvature of the object's contour rather than by the true motion direction.
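For context, here is a minimal sketch of the plane-fitting style of estimator that many of those algorithms use, run on synthetic events from a vertical edge; the patch size, units, and Cramer's-rule solver are illustrative. The fitted timestamp gradient is zero along the edge, so any motion component parallel to the edge is invisible, which is exactly the aperture problem.

```cpp
#include <cmath>
#include <cstdio>

// Fit the plane t = a*x + b*y + c to event timestamps in a neighbourhood;
// the gradient (a, b) yields only the normal flow v = (a, b) / (a^2 + b^2).

int main() {
    // Synthetic events: a vertical edge moving right at 2 px/ms, so the
    // timestamp map over a 5x5 patch is t(x, y) = x / 2 ms.
    const int n = 5;
    double sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0, s1 = 0;
    double sxt = 0, syt = 0, st = 0;
    for (int x = 0; x < n; ++x)
        for (int y = 0; y < n; ++y) {
            double t = x / 2.0;  // ms
            sxx += x * x; sxy += x * y; sx += x;
            syy += y * y; sy += y; s1 += 1;
            sxt += x * t; syt += y * t; st += t;
        }
    // Solve the 3x3 normal equations for (a, b, c) via Cramer's rule.
    auto det3 = [](double m[3][3]) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    };
    double A[3][3] = {{sxx, sxy, sx}, {sxy, syy, sy}, {sx, sy, s1}};
    double rhs[3] = {sxt, syt, st};
    double d = det3(A);
    double sol[3];
    for (int col = 0; col < 3; ++col) {
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) M[i][j] = (j == col) ? rhs[i] : A[i][j];
        sol[col] = det3(M) / d;
    }
    double a = sol[0], b = sol[1];
    // Only the component along the timestamp gradient is recoverable.
    double g2 = a * a + b * b;
    std::printf("normal flow: vx=%.2f vy=%.2f px/ms\n", a / g2, b / g2);
}
```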


Neuromorphic vision sensors, also known as Dynamic Vision Sensors (DVS), take inspiration from the mammalian retina: they detect changes in luminosity and provide a stream of events with high temporal resolution. This continuous stream of events can be used to extract spatio-temporal patterns from a scene. A time-surface represents the spatio-temporal context around an incoming event: the recent event history within a given spatial radius of the event, at the time it arrives.
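To make the definition concrete, here is a minimal sketch of a time-surface with an exponential kernel, assuming hypothetical (x, y, t) events; the radius and decay constant are illustrative values, not the paper's.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// Time-surface sketch: keep each pixel's most recent timestamp, and around an
// incoming event apply the decay exp(-(t - t_last)/tau) to every pixel in a
// radius-R neighbourhood, as in the HOTS family of methods.

constexpr int kWidth = 32, kHeight = 32;
constexpr int kRadius = 2;       // spatial radius R
constexpr double kTau = 50.0;    // decay constant, ms (illustrative value)

int main() {
    // Per-pixel timestamp of the most recent event (-1e9 means "never fired").
    static std::array<std::array<double, kWidth>, kHeight> lastTime;
    for (auto& row : lastTime) row.fill(-1e9);

    // A few hand-made events: (x, y, t in ms).
    const double events[][3] = {{15, 15, 0}, {16, 15, 10}, {17, 15, 20}, {16, 16, 25}};

    for (const auto& e : events) {
        int x = (int)e[0], y = (int)e[1];
        double t = e[2];
        lastTime[y][x] = t;

        // Time-surface centred on the current event.
        std::printf("time-surface at t=%.0f ms:\n", t);
        for (int dy = -kRadius; dy <= kRadius; ++dy) {
            for (int dx = -kRadius; dx <= kRadius; ++dx) {
                double s = std::exp(-(t - lastTime[y + dy][x + dx]) / kTau);
                std::printf("%5.2f ", s);
            }
            std::printf("\n");
        }
    }
}
```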


We evaluated the performance of a new device to control the administration of fluid alone, or the co-administration of fluid and norepinephrine, in a pig model of haemorrhagic shock, in two sets of experiments. In the first set, resuscitation was guided by continuous arterial pressure measurements (three groups: resuscitation with fluid by a physician, closed-loop (CL) resuscitation with fluid, and CL resuscitation with fluid and norepinephrine). In the second set, resuscitation was guided by discontinuous arterial pressure measurements (three groups: CL resuscitation with fluid alone, CL resuscitation with fluid and a moderate dose of norepinephrine, and CL resuscitation with fluid and a high dose of norepinephrine).


Precise spike timing and temporal coding are used extensively within the nervous systems of insects and in the sensory periphery of higher-order animals. However, conventional Artificial Neural Networks (ANNs) and machine learning algorithms cannot take advantage of this coding strategy because of their rate-based representation of signals. Even in the case of artificial Spiking Neural Networks (SNNs), identifying applications where temporal coding outperforms the rate-coding strategies of ANNs is still an open challenge.


In this paper, we introduce a framework for dynamic gesture recognition with background suppression, operating on the output of a moving event-based camera. The system is designed to run in real time using only the computational capabilities of a mobile phone. It introduces a new extension of the concept of time-surfaces.


This paper introduces a new open-source, header-only, and modular C++ framework that facilitates the implementation of event-driven algorithms. The framework relies on three independent components: one for file IO, one for algorithms, and one for display. Our benchmarks show that algorithms implemented with the framework are faster and have lower latency than identical implementations in other state-of-the-art frameworks, thanks to static polymorphism (compile-time pipeline assembly).
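The framework's actual API is not reproduced here, but the following minimal sketch (all names hypothetical) shows the static-polymorphism idea: each pipeline stage takes its downstream stage as a template parameter, so the whole pipeline is a single concrete type assembled at compile time, and the compiler can inline calls that a virtual-function design would dispatch at run time.

```cpp
#include <cstdint>
#include <cstdio>

struct Event { uint16_t x, y; uint64_t t; };

// A stage that filters out events outside a region of interest; its
// downstream handler type is a template parameter, known at compile time.
template <typename Next>
struct RoiFilter {
    Next next;
    void operator()(const Event& e) {
        if (e.x < 100 && e.y < 100) next(e);
    }
};

// A terminal stage that consumes events.
struct Printer {
    void operator()(const Event& e) {
        std::printf("event x=%u y=%u t=%llu\n",
                    (unsigned)e.x, (unsigned)e.y, (unsigned long long)e.t);
    }
};

int main() {
    // The whole pipeline is one concrete type, assembled at compile time.
    RoiFilter<Printer> pipeline{Printer{}};
    pipeline({10, 20, 1000});
    pipeline({200, 20, 2000});  // discarded by the ROI filter
}
```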


Retinal dystrophies and age-related macular degeneration related to photoreceptor degeneration can cause blindness. In blind patients, although electrical activation of the residual retinal circuit can provide useful artificial visual perception, the resolution of current retinal prostheses has been limited either by large electrodes or by small numbers of pixels. Here we report the evaluation, in three awake non-human primates, of a previously reported near-infrared-light-sensitive photovoltaic subretinal prosthesis.


Most dynamic systems are controlled by discrete-time controllers. One of the main challenges in designing a digital control law is the selection of an appropriate sampling time. A small sampling time increases the accuracy of the controlled output, at the expense of heavier computation.
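As a toy illustration of that trade-off (not an example from the paper), the sketch below simulates the plant dx/dt = -x with a forward-Euler digital update at three sampling times: the error shrinks roughly linearly with the sampling time h, while the number of controller updates grows as 1/h.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double horizon = 5.0;  // seconds of simulated time
    for (double h : {1.0, 0.1, 0.01}) {
        double x = 1.0;
        int steps = (int)(horizon / h);
        // Forward-Euler digital update x[k+1] = x[k] + h * (-x[k]).
        for (int k = 0; k < steps; ++k) x += h * (-x);
        double exact = std::exp(-horizon);  // true solution x(t) = exp(-t)
        std::printf("h=%.2f s: %d updates, x(5)=%.5f, error=%.5f\n",
                    h, steps, x, std::fabs(x - exact));
    }
}
```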


In this work, we propose a two-layered descriptive model for motion processing from retina to cortex, with event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors is implemented in two steps: a simple layer modelling the lateral geniculate nucleus, followed by a set of three-dimensional Gabor kernels whose outputs form a probabilistic population response. The high temporal resolution of the ATIS's independent, asynchronous pixels provides realistic stimulation for studying biological motion processing, as well as for developing bio-inspired motion processors for computer vision applications.
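A minimal sketch of one such kernel, with invented parameter values: a Gaussian envelope in space and time multiplied by a sinusoid whose space-time orientation (the ratio of temporal to spatial frequency) sets the preferred velocity. A real motion-energy detector would pair this cosine kernel with its quadrature (sine) counterpart and sum the squared responses; only one phase is shown here.

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Spatiotemporal (3-D) Gabor kernel: Gaussian envelope in (x, y, t) times a
// drifting sinusoid with spatial frequencies (fx, fy) and temporal frequency ft.
double gabor3d(double x, double y, double t,
               double sigmaS, double sigmaT,
               double fx, double fy, double ft) {
    double envelope = std::exp(-(x * x + y * y) / (2 * sigmaS * sigmaS)
                               - t * t / (2 * sigmaT * sigmaT));
    double carrier = std::cos(2 * kPi * (fx * x + fy * y + ft * t));
    return envelope * carrier;
}

int main() {
    // A detector tuned to ~1 px/frame rightward motion (v = -ft/fx = 1).
    const double sigmaS = 2.0, sigmaT = 2.0, fx = 0.25, fy = 0.0, ft = -0.25;
    for (int t = -2; t <= 2; ++t) {
        for (int x = -2; x <= 2; ++x)
            std::printf("%6.2f ", gabor3d(x, 0, t, sigmaS, sigmaT, fx, fy, ft));
        std::printf("\n");
    }
}
```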


Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy-demanding, and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels' absolute luminance values. Recent advances in neuromorphic engineering offer an alternative, with the use of event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain.


The optical quality of the eye is determined, at least in part, by pupil size. Large pupils let more light enter the eye, but they degrade the point spread function and thus the achievable spatial resolution (Campbell & Gregory, 1960). In natural conditions, the pupil is mainly driven by the luminance (and possibly the color and contrast) at the gazed location, but it is also modulated by attention and cognitive factors.


Johnson-Nyquist noise is the electronic noise generated by the thermal agitation of charge carriers, and it increases as the sensor heats up. Current high-speed cameras used in low-light conditions are therefore often cooled to reduce thermal noise and increase their signal-to-noise ratio. These sensors, however, record hundreds of frames per second, which takes time, consumes energy, and demands heavy computing power due to the substantial data load.
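For reference, the standard Johnson-Nyquist relation (a textbook fact, not a result of this paper) for the mean-square noise voltage across a resistance R at absolute temperature T over a bandwidth Δf is

```latex
\overline{v_n^2} = 4\, k_B\, T\, R\, \Delta f
```

so the noise power grows linearly with temperature, which is why cooling the sensor improves the signal-to-noise ratio.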


Background: Closed-loop resuscitation can improve the personalization of care, decrease workload, and bring expert knowledge to isolated areas. We have developed a new device that uses arterial pressure to control the administration of fluid, or the simultaneous co-administration of fluid and norepinephrine.

Method: We evaluated the performance of our prototype in a rodent model of haemorrhagic shock.


This paper introduces an event-based, luminance-free algorithm for line and segment detection from the output of asynchronous event-based neuromorphic retinas. These recent biomimetic vision sensors are composed of autonomous pixels, each asynchronously generating visual events that encode relative changes in illumination at high temporal resolution. This frame-free approach increases energy efficiency and enables real-time operation, making these sensors especially suitable for applications such as autonomous robotics.


3D reconstruction from multiple viewpoints is an important problem in machine vision: it recovers three-dimensional structure from multiple two-dimensional views of a given scene. Reconstruction from multiple views is conventionally achieved through pixel luminance-based matching between the views. Unlike conventional machine vision methods, which solve matching ambiguities by operating only on spatial constraints and luminance, this paper introduces a fully time-based solution to stereovision using the high temporal resolution of neuromorphic asynchronous event-based cameras.
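As a rough sketch of the time-based matching idea, assume two rectified event cameras so that the epipolar constraint reduces to matching within the same row; the events, coincidence window, and all names below are invented for the demo.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Event { int x, y; double t; };  // t in microseconds

int main() {
    // Hand-made event lists from a left and a right camera.
    std::vector<Event> left  = {{40, 10, 1000.0}, {41, 10, 1250.0}};
    std::vector<Event> right = {{30, 10, 1004.0}, {31, 10, 1253.0}, {55, 10, 1900.0}};
    const double maxDt = 50.0;  // coincidence window, us

    // Match each left event to the same-row right event closest in time.
    for (const Event& l : left) {
        const Event* best = nullptr;
        for (const Event& r : right)
            if (r.y == l.y && std::fabs(r.t - l.t) < maxDt &&
                (!best || std::fabs(r.t - l.t) < std::fabs(best->t - l.t)))
                best = &r;
        if (best)
            std::printf("match: left x=%d <-> right x=%d, disparity=%d, dt=%.0f us\n",
                        l.x, best->x, l.x - best->x, std::fabs(best->t - l.t));
    }
}
```

The disparity of each match (x_left - x_right) is inversely proportional to depth, as in conventional stereo; what changes is that the matching cue is the event timestamp rather than luminance.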


As the interest in event-based vision sensors for mobile and aerial applications grows, there is an increasing need for high-speed and highly robust algorithms for performing visual tasks using event-based data. As event rate and network structure have a direct impact on the power consumed by such systems, it is important to explore the efficiency of the event-based encoding used by these sensors. The work presented in this paper represents the first study solely focused on the effects of both spatial and temporal downsampling on event-based vision data and makes use of a variety of data sets chosen to fully explore and characterize the nature of downsampling operations.
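A minimal sketch of the two operations, under simple assumptions (k x k spatial pooling of pixel coordinates and timestamp quantization to a coarser tick, with duplicate events dropped); the paper's exact downsampling definitions may differ.

```cpp
#include <cstdio>
#include <vector>

struct Event { int x, y, polarity; long long t; };  // t in microseconds

int main() {
    std::vector<Event> stream = {
        {10, 12, 1, 1003}, {11, 13, 1, 1004}, {40, 7, -1, 2500}, {10, 12, 1, 1950},
    };
    const int k = 4;             // spatial factor: merge 4x4 pixel blocks
    const long long tick = 1000; // temporal factor: quantize to 1 ms bins

    // Downsample, dropping events that collapse onto an identical
    // (pixel, tick, polarity) slot.
    std::vector<Event> out;
    for (const Event& e : stream) {
        Event d{e.x / k, e.y / k, e.polarity, (e.t / tick) * tick};
        bool duplicate = false;
        for (const Event& o : out)
            duplicate |= (o.x == d.x && o.y == d.y &&
                          o.polarity == d.polarity && o.t == d.t);
        if (!duplicate) out.push_back(d);
    }
    for (const Event& e : out)
        std::printf("x=%d y=%d p=%+d t=%lld\n", e.x, e.y, e.polarity, e.t);
}
```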

Article Synopsis
  • This paper presents a new neural network specifically designed to estimate optical flow using data from dynamic vision sensors that output spikes when they detect changes in light.
  • The system is implemented energy-efficiently on IBM's TrueNorth Neurosynaptic System, which processes the precise timing of incoming spikes to determine the velocity of moving objects.
  • Evaluation of the system shows it has a low error rate of 11% while consuming less than 80 mW of power for both the sensor and processing.

Object tracking is a major problem for many computer vision applications, but it remains computationally expensive. The use of bio-inspired neuromorphic event-driven dynamic vision sensors (DVSs) has heralded new methods for vision processing, exploiting a reduced amount of data and very precise timing resolution. Previous studies have shown these neural spiking sensors to be well suited to implementing single-sensor object tracking systems, although they experience difficulties when solving ambiguities caused by object occlusion.


This paper introduces an event-based methodology for performing arbitrary linear basis transformations, encompassing a broad range of practically important signal transforms such as the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT). We present a complexity analysis of the proposed method and show that, in natural video sequences, the number of required multiply-and-accumulate operations is reduced in comparison to frame-based methods when the required temporal resolution is high enough. Experimental results on natural video sequences acquired by the asynchronous time-based neuromorphic image sensor (ATIS) support the feasibility of the method and illustrate the savings in computational resources.
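The core of the event-driven update can be sketched in a few lines: keep the transform coefficients X = B * I up to date incrementally, so that an event reporting a luminance change delta at pixel p costs only one column update, X[k] += delta * B[k][p], instead of a full-frame recomputation. The tiny Haar-style basis and the event list below are invented for the demo.

```cpp
#include <cstdio>

constexpr int N = 4;
// A 4-point (unnormalized) Haar-style basis, one transform row per line.
const double B[N][N] = {
    {1, 1, 1, 1},      // average
    {1, 1, -1, -1},    // coarse difference
    {1, -1, 0, 0},     // fine difference, left pair
    {0, 0, 1, -1},     // fine difference, right pair
};

int main() {
    double image[N] = {0, 0, 0, 0};
    double X[N] = {0, 0, 0, 0};  // transform coefficients, kept incrementally

    // Hypothetical events: (pixel index, luminance change).
    const int pix[] = {0, 2, 0};
    const double delta[] = {+8.0, +4.0, -2.0};

    for (int e = 0; e < 3; ++e) {
        image[pix[e]] += delta[e];
        for (int k = 0; k < N; ++k)  // O(N) per event, no full recomputation
            X[k] += delta[e] * B[k][pix[e]];
    }
    // Verify against the direct (frame-based) transform of the final image.
    for (int k = 0; k < N; ++k) {
        double direct = 0;
        for (int p = 0; p < N; ++p) direct += B[k][p] * image[p];
        std::printf("X[%d]: incremental=%.1f direct=%.1f\n", k, X[k], direct);
    }
}
```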
