To safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While many algorithms tackle these tasks for traditional frame-based cameras, they have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be re-detected in consecutive frames and then matched using elaborate techniques, as any information between two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously, without a periodic sampling rate, and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, making it suitable for low-latency robotics. Efficacy and performance are evaluated on real-world data sets showing artificial structures in an office building, recorded with a DAVIS240C sensor, using event data for tracking and frame data for ground-truth estimation.
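To illustrate the core idea behind such plane-based tracking (this is a hedged sketch, not the authors' implementation): a line edge moving across the sensor at constant velocity produces address events that lie approximately on a plane in x-y-t space, so fitting that plane recovers both the line and its normal velocity. The minimal Python example below fits such a plane to synthetic events; the function name `fit_plane`, the parametrization t = a·x + b·y + c, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_plane(events):
    """Least-squares fit of t = a*x + b*y + c to DVS address events.

    events: (N, 3) array of (x, y, t). Events fired by a line edge moving
    at constant velocity lie approximately on such a plane in x-y-t space.
    NOTE: illustrative parametrization, not the paper's exact formulation.
    """
    A = np.column_stack([events[:, 0], events[:, 1], np.ones(len(events))])
    (a, b, c), *_ = np.linalg.lstsq(A, events[:, 2], rcond=None)
    return a, b, c

# Synthetic stream: a vertical line sweeping rightward at 20 px/s, so each
# event satisfies t ~= x / 20 regardless of its y coordinate.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 500)            # event timestamps (s)
x = 20.0 * t + rng.normal(0.0, 0.3, 500)  # edge position plus pixel noise
y = rng.uniform(0.0, 180.0, 500)          # events spread along the line
a, b, c = fit_plane(np.column_stack([x, y, t]))

# Expect a ~ 0.05 and b ~ 0; the edge's normal speed is 1/a ~ 20 px/s.
print(f"a={a:.4f}  b={b:.4f}  c={c:.4f}  speed~{1.0 / a:.1f} px/s")
```

In the paper's setting, such planes would be updated incrementally as new events arrive (tracing the planes through time), rather than batch-fitted as in this sketch.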
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5825909 (PMC)
DOI: http://dx.doi.org/10.3389/fnbot.2018.00004
Psychol Aging
February 2025
Wilhelm Wundt Institute of Psychology, Leipzig University.
In this editorial, I outline two key changes to the submission guidelines, and I present my vision as the new editor for Psychology and Aging, the premier outlet for psychological research on aging and adult lifespan development. To enhance the impact of research published in the journal, my editorial team and I will accept articles that make strong theoretical contributions, are methodologically rigorous and transparent, use open science practices, contribute cumulative knowledge to the field, and have important practical implications. We will continue to publish high-quality empirical articles, systematic reviews, and meta-analyses, as well as theory development and methodological articles from all areas of psychology and related disciplines that focus on basic principles of aging and adult lifespan development or that investigate these principles in applied settings.
J Exp Biol
January 2025
Centre de Recherches sur la Cognition Animale, CNRS, Université Paul Sabatier, Toulouse 31062 cedex 09, France.
Solitary foraging insects like desert ants rely heavily on vision for navigation. While ants can learn visual scenes, it is unclear what cues they use to decide whether a scene is worth exploring in the first place. To investigate this, we recorded the motor behavior of Cataglyphis velox ants navigating in a virtual reality (VR) set-up and measured their lateral oscillations in response to various unfamiliar visual scenes under both closed-loop and open-loop conditions.
Front Robot AI
January 2025
Life- and Neurosciences, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.
Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption.
J Neurosci
January 2025
The Department of Psychology and The Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem.
Predictive updating of an object's spatial coordinates from pre-saccade to post-saccade contributes to stable visual perception. Whether object features are predictively remapped remains contested. We set out to characterise the spatiotemporal dynamics of feature processing during stable fixation and active vision.
Sci Rep
January 2025
Centre for Applied Photonics, INESC TEC, Rua do Campo Alegre 687, 4169-007, Porto, Portugal.
Spectral Imaging techniques such as Laser-induced Breakdown Spectroscopy (LIBS) and Raman Spectroscopy (RS) enable the localized acquisition of spectral data, providing insights into the presence, quantity, and spatial distribution of chemical elements or molecules within a sample. This significantly expands the accessible information compared to conventional imaging approaches such as machine vision. However, despite its potential, spectral imaging also faces specific challenges depending on the limitations of the spectroscopy technique used, such as signal saturation, matrix interferences, fluorescence, or background emission.