We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes, where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features, such as color, motion, and orientation, by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object.
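The abstract does not include the model's equations, but the core idea can be sketched in a few lines. The following is an illustration, not the published model: the anti-Hebbian update rule, the learning rates, and the name `update_binding` are all assumptions chosen to show how common temporal fluctuations can weaken inhibitory weights between feature channels driven by the same object.

```python
import numpy as np

def update_binding(weights, signals, lr=0.01):
    """One anti-Hebbian update of an inhibitory weight matrix (illustrative).

    weights : (n, n) array of non-negative inhibitory weights; a small
              weight between two channels marks them as 'bound'.
    signals : (n,) array of mean-subtracted feature-channel responses
              (e.g., color, motion, orientation detectors) at one time step.
    """
    # Channels whose responses fluctuate together (positive product) have
    # their mutual inhibition weakened; uncorrelated channels relax back
    # toward full inhibition.
    co_fluct = np.outer(signals, signals)
    weights -= lr * co_fluct                # weaken inhibition for correlated pairs
    weights += lr * 0.1 * (1.0 - weights)   # slow relaxation toward full inhibition
    np.fill_diagonal(weights, 0.0)          # no self-inhibition
    return np.clip(weights, 0.0, 1.0)

# Toy demo: channels 0-2 are driven by one object, channels 3-5 by another.
rng = np.random.default_rng(0)
W = np.ones((6, 6))
for _ in range(2000):
    a, b = rng.standard_normal(), rng.standard_normal()
    x = np.array([a, a, a, b, b, b]) + 0.1 * rng.standard_normal(6)
    W = update_binding(W, x)
# After training, inhibition within each triplet is low (features bound
# to one object), while inhibition across triplets stays high.
```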
Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities.
Annu Int Conf IEEE Eng Med Biol Soc, July 2016
When imitating biological sensors, the early processing of the sensory input is often not understood well enough to be reproduced artificially. Building hybrid systems with both artificial and real biological components is a promising solution. For example, when a dragonfly is used as a living sensor, the early processing of visual information is performed entirely within the brain of the dragonfly.
Collision avoidance models derived from the study of insect brains do not perform universally well in practical collision scenarios, although the insects themselves may perform well in similar situations. In this article, we present a detailed simulation analysis of two well-known collision avoidance models and illustrate their limitations. In doing so, we present a novel continuous-time implementation of a neuronally based collision avoidance model.
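The article's own continuous-time formulation is not reproduced in this snippet, but the general technique is to express the neuron as an ordinary differential equation and integrate it with a small fixed time step. A minimal leaky-integrator sketch, with the time constant, threshold, and looming drive all assumed for illustration:

```python
import numpy as np

def simulate_leaky_integrator(inputs, dt=1e-3, tau=0.05, threshold=0.8):
    """Forward-Euler integration of a leaky integrator, dv/dt = (-v + I)/tau.

    inputs : 1-D array of excitatory drive sampled every dt seconds
             (e.g., a summed edge-expansion signal in a looming detector).
    Returns the membrane trace and the first index where it crosses
    threshold, which a collision-avoidance controller could treat as
    the trigger to initiate an escape maneuver.
    """
    v = 0.0
    trace = np.empty(len(inputs))
    trigger = None
    for i, drive in enumerate(inputs):
        v += dt * (-v + drive) / tau   # one Euler step of the ODE
        trace[i] = v
        if trigger is None and v > threshold:
            trigger = i
    return trace, trigger

t = np.arange(0, 2, 1e-3)
drive = np.exp((t - 2.0) * 3.0)        # stand-in for an approaching-object ramp
trace, hit = simulate_leaky_integrator(drive)
```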
Vis Neurosci, September 2011
Motion-sensitive neurons in the visual systems of many species, including humans, exhibit a depression of motion responses immediately after being exposed to rapidly moving images. This motion adaptation has been extensively studied in flies, but a neuronal mechanism that explains the most prominent component of adaptation, which occurs regardless of the direction of motion of the visual stimulus, has yet to be proposed. We identify a neuronal mechanism, namely frequency-dependent synaptic depression, which explains a number of the features of adaptation in mammalian motion-sensitive neurons and use it to model fly motion adaptation.
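A standard way to capture frequency-dependent synaptic depression in a rate-based model is resource depletion with slow recovery, in the style of Tsodyks and Markram. Whether the paper uses exactly this form is not stated in the snippet, so the following is an illustrative sketch with assumed constants:

```python
import numpy as np

def depressing_synapse(presyn_rate, dt=1e-3, tau_rec=0.5, u=0.2):
    """Rate-based synaptic depression: a resource x in [0, 1] is consumed
    in proportion to presynaptic drive and recovers with time constant
    tau_rec:  dx/dt = (1 - x)/tau_rec - u * rate * x.

    The transmitted signal is u * rate * x, so sustained high-frequency
    drive (fast image motion) depletes x and depresses the response,
    qualitatively matching direction-independent motion adaptation.
    """
    x = 1.0
    out = np.empty(len(presyn_rate))
    for i, r in enumerate(presyn_rate):
        x += dt * ((1.0 - x) / tau_rec - u * r * x)
        out[i] = u * r * x
    return out

# Demo: the response to a test stimulus is weaker immediately after a
# strong adapting burst, then recovers as the resource replenishes.
t = np.arange(0, 3, 1e-3)
rate = np.where((t > 0.5) & (t < 2.0), 80.0, 10.0)   # adapting burst (Hz)
resp = depressing_synapse(rate)
```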
Insect navigational behaviors including obstacle avoidance, grazing landings, and visual odometry are dependent on the ability to estimate flight speed based only on visual cues. In honeybees, this visual estimate of speed is largely independent of both the direction of motion and the spatial frequency content of the image. Electrophysiological recordings from the motion-sensitive cells believed to underlie these behaviors have long supported spatio-temporally tuned correlation-type models of visual motion detection whose speed tuning changes as the spatial frequency of a stimulus is varied.
Insects use visual estimates of flight speed for a variety of behaviors, including visual navigation, odometry, grazing landings and flight speed control, but the neuronal mechanisms underlying speed detection remain unknown. Although many models and theories have been proposed for how the brain extracts the angular speed of the retinal image, termed optic flow, we lack the detailed electrophysiological and behavioral data necessary to conclusively support any one model. One key property by which different models of motion detection can be differentiated is their spatiotemporal frequency tuning.
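Both of the preceding abstracts contrast speed tuning with the spatiotemporal frequency tuning of correlation-type (Hassenstein-Reichardt) detectors. A minimal correlator sketch makes the distinction concrete; the filter constant, receptor spacing, and stimulus parameters below are assumptions, not values from either paper. For a drifting sinusoidal grating, the time-averaged output depends on temporal frequency (speed times spatial frequency), so changing the spatial frequency shifts the apparent speed tuning.

```python
import numpy as np

def reichardt_response(left, right, dt=1e-3, tau=0.035):
    """Hassenstein-Reichardt correlator on two neighboring photoreceptor
    signals.  Each input is low-pass filtered (the delay arm) and
    multiplied with the undelayed neighbor; the opponent subtraction
    yields a direction-selective output whose mean response peaks at a
    fixed temporal frequency, not a fixed image speed.
    """
    alpha = dt / (tau + dt)                # first-order low-pass coefficient
    lp_l = lp_r = 0.0
    out = np.empty(len(left))
    for i in range(len(left)):
        lp_l += alpha * (left[i] - lp_l)   # delayed left channel
        lp_r += alpha * (right[i] - lp_r)  # delayed right channel
        out[i] = lp_l * right[i] - lp_r * left[i]
    return out

# Drifting sinusoidal grating: temporal frequency ft = speed * fs.
fs, speed = 0.05, 100.0                    # cycles/deg and deg/s (assumed)
t = np.arange(0, 1, 1e-3)
phase = 2 * np.pi * fs * speed * t
left = np.sin(phase)
right = np.sin(phase - 2 * np.pi * fs * 2.0)   # receptors 2 deg apart
mean_resp = reichardt_response(left, right).mean()  # tracks ft, not speed alone
```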
Conf Proc IEEE Eng Med Biol Soc, February 2008
The objective of this study is to improve the quality of life for the visually impaired by restoring their ability to self-navigate. In this paper we describe a compact, wearable device that converts visual information into a tactile signal. This device, constructed entirely from commercially available parts, enables the user to perceive distant objects via a different sensory modality.
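The device's hardware pipeline is not detailed in this snippet. The core sensory-substitution step, reducing a camera frame to a coarse grid of tactile drive levels, might look like the following sketch, where the grid size, the intensity mapping, and the function name are all assumptions for illustration:

```python
import numpy as np

def frame_to_tactile(gray_frame, grid=(4, 4)):
    """Downsample a grayscale camera frame to a small grid of tactile
    drive levels in [0, 1], one per actuator.  Brighter (or nearer, if
    the input is a depth map) regions produce stronger drive.
    """
    h, w = gray_frame.shape
    gh, gw = grid
    cells = gray_frame[: h - h % gh, : w - w % gw]            # crop to a multiple
    cells = cells.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return np.clip(cells / 255.0, 0.0, 1.0)

frame = np.random.randint(0, 256, (120, 160)).astype(float)  # stand-in image
drive = frame_to_tactile(frame)                               # 4x4 actuator levels
```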
Flies have the capability to visually track small moving targets, even across cluttered backgrounds. Previous computational models, based on figure detection (FD) cells identified in the fly, have suggested how this may be accomplished at a neuronal level based on information about relative motion between the target and the background. We experimented with the use of this "small-field system model" for the tracking of small moving targets by a simulated fly in a cluttered environment and discovered some functional limitations.
Based on comparative anatomical studies and electrophysiological experiments, we have identified a conserved subset of neurons in the lamina, medulla, and lobula of dipterous insects that are involved in retinotopic visual motion direction selectivity. Working from the photoreceptors inward, this neuronal subset includes lamina amacrine (alpha) cells, lamina monopolar (L2) cells, the basket T-cell (T1 or beta), the transmedullary cell Tm1, and the T5 bushy T-cell. Two GABA-immunoreactive neurons, the transmedullary cell Tm9 and a local interneuron at the level of T5 dendrites, are also implicated in the motion computation.
Biol Cybern, November 2004
Behavioral experiments suggest that insects make use of the apparent image speed on their compound eyes to navigate through obstacles, control flight speed, land smoothly, and measure the distance they have flown. However, the vast majority of electrophysiological recordings from motion-sensitive insect neurons show responses which are tuned in spatial and temporal frequency and are thus unable to unambiguously represent image speed. We suggest that this contradiction may be resolved at an early stage of visual motion processing using nondirectional motion sensors whose responses grow proportionally with image speed up to a peak response.
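The paper's specific sensor is not reproduced in this snippet. One well-known way to obtain a nondirectional signal that grows proportionally with image speed is a gradient scheme, |dI/dt| / |dI/dx|, capped at an assumed peak response; the sketch below is an illustration under those assumptions, not the published mechanism.

```python
import numpy as np

def nondirectional_speed(frame_t0, frame_t1, dt=1e-2, peak=200.0, eps=1e-3):
    """Gradient-based, nondirectional speed estimate for a 1-D image row.

    |dI/dt| / |dI/dx| rises proportionally with image speed regardless of
    motion direction; the pooled output is capped at an assumed peak.
    """
    dI_dt = (frame_t1 - frame_t0) / dt
    dI_dx = np.gradient(frame_t1)
    speed = np.abs(dI_dt) / (np.abs(dI_dx) + eps)   # per-pixel estimate
    return np.minimum(np.median(speed), peak)       # robust pooled readout
```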