This paper presents a novel N-ocular 3D reconstruction algorithm for event-based vision data from bio-inspired artificial retina sensors. Artificial retinas capture visual information asynchronously and encode it into streams of spike-like pulse signals carrying information on, e.g., temporal contrast events in the scene. The precise time of occurrence of these visual features is implicitly encoded in the spike timings. Owing to the high temporal resolution of this asynchronous acquisition, the output of these sensors is ideally suited for dynamic 3D reconstruction. The presented technique takes full advantage of the event-driven operation: events are processed individually at the moment they arrive. This strategy preserves the original dynamics of the scene and hence allows for more robust 3D reconstructions. Unlike existing techniques, the algorithm relies on geometric and time constraints alone, making it particularly simple to implement and largely linear.
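The abstract gives no code, but the two constraints it names — temporal coincidence and epipolar geometry — are enough to sketch the event-by-event matching idea for the binocular case. The following Python sketch is an illustration under assumed parameters (fundamental matrix `F`, projection matrices `P_L`/`P_R`, coincidence window `DT_MAX`, epipolar tolerance `EPI_MAX`), not the authors' implementation:

```python
# Sketch of event-driven stereo matching: each incoming left event is matched
# against a short buffer of recent right events using only time coincidence
# and the epipolar constraint, then triangulated. All parameters below are
# illustrative assumptions, not values from the paper.

import numpy as np
from collections import deque

F = np.eye(3)            # assumed: fundamental matrix from a prior stereo calibration
DT_MAX = 1e-3            # assumed: temporal coincidence window, in seconds
EPI_MAX = 1.0            # assumed: max distance to the epipolar line, in pixels

# assumed projection matrices for the linear triangulation step
P_L = np.hstack([np.eye(3), np.zeros((3, 1))])
P_R = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

recent_right = deque()   # short buffer of recent right-retina events

def on_right_event(e):
    """Buffer a right-retina event (x, y, t, polarity) as it arrives."""
    recent_right.append(e)

def epipolar_distance(e_left, e_right):
    """Distance of the right event to the epipolar line induced by the left event."""
    p_l = np.array([e_left[0], e_left[1], 1.0])
    p_r = np.array([e_right[0], e_right[1], 1.0])
    line = F @ p_l                                   # epipolar line in the right image
    return abs(p_r @ line) / np.hypot(line[0], line[1])

def triangulate(e_left, e_right):
    """Linear (DLT) triangulation of a matched event pair into a 3D point."""
    A = np.array([
        e_left[0]  * P_L[2] - P_L[0],
        e_left[1]  * P_L[2] - P_L[1],
        e_right[0] * P_R[2] - P_R[0],
        e_right[1] * P_R[2] - P_R[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def on_left_event(e_left):
    """Match one left-retina event at arrival, using time + geometry alone."""
    x, y, t, pol = e_left
    while recent_right and t - recent_right[0][2] > DT_MAX:
        recent_right.popleft()                       # expire stale right events
    candidates = [e for e in recent_right
                  if e[3] == pol and epipolar_distance(e_left, e) < EPI_MAX]
    if not candidates:
        return None                                  # no coincident match -> no 3D point
    best = min(candidates, key=lambda e: epipolar_distance(e_left, e))
    return triangulate(e_left, best)
```

Because each event is tested only against a short, time-bounded buffer from the other retina, the per-event cost stays small, which is what makes an event-by-event (rather than frame-by-frame) pipeline practical.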
DOI: http://dx.doi.org/10.1016/j.neunet.2013.03.006
Sensors (Basel)
November 2024
Artificial Intelligence and Robotics Lab (AIRLab), Department of Computer Science, Saint Louis University, Saint Louis, MO 63103, USA.
In this paper, we present the first successful application of neuromorphic event cameras (ECs) to Wide-Area Motion Imagery (WAMI) and Remote Sensing (RS), showcasing their potential for advancing Structure-from-Motion (SfM) and 3D reconstruction across diverse imaging scenarios. ECs, which detect asynchronous pixel-level brightness changes, offer key advantages over traditional frame-based sensors, such as high temporal resolution, low power consumption, and resilience to dynamic lighting. These capabilities allow ECs to overcome challenges such as glare, uneven lighting, and low-light conditions that are common in aerial imaging and remote sensing, while also extending UAV flight endurance.
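For readers unfamiliar with EC data: the stream is just a sequence of (x, y, t, polarity) tuples, and a common bridge to frame-based SfM tooling is to accumulate a short time window of events into an "event frame". The sketch below illustrates that step under assumed array shapes and window length; it is not the paper's pipeline:

```python
# Hedged sketch: turn a slice of an event stream into a frame-like array that
# conventional SfM / feature-detection code can consume. The 10 ms window and
# the 480x640 resolution are illustrative assumptions.

import numpy as np

def events_to_frame(xs, ys, ts, ps, t0, window=0.010, shape=(480, 640)):
    """Accumulate signed event polarities in [t0, t0 + window) into an image.

    xs, ys : integer pixel coordinates of each event
    ts     : timestamps in seconds
    ps     : polarities in {-1, +1}
    """
    frame = np.zeros(shape, dtype=np.float32)
    mask = (ts >= t0) & (ts < t0 + window)
    # signed accumulation: ON and OFF events push a pixel in opposite directions
    np.add.at(frame, (ys[mask], xs[mask]), ps[mask])
    return frame

# usage, given event arrays xs, ys, ts, ps:
#   frame = events_to_frame(xs, ys, ts, ps, t0=ts[0])
```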
Event cameras, inspired by biological vision, offer high dynamic range, excellent temporal resolution, and minimal data redundancy. Precise calibration of event-camera systems is essential for applications such as 3D vision. The discontinuation of auxiliary gray-frame output in popular models such as the dynamic vision sensor (DVS) poses significant challenges to achieving high-accuracy calibration.
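The practical problem this abstract points at: without gray frames, standard pattern-based calibration has nothing in which to detect corners. One common workaround — sketched here under illustrative assumptions, and not necessarily this paper's method — is to display a blinking calibration pattern, accumulate its events into an image, and run ordinary frame-based calibration with OpenCV:

```python
# Hedged sketch: calibrate a DVS by accumulating events from a blinking
# chessboard into images and feeding them to standard OpenCV calibration.
# Pattern size, square size, and resolution are illustrative assumptions.

import cv2
import numpy as np

def detect_corners(event_frame, pattern_size=(9, 6)):
    """Detect chessboard corners in an image accumulated from events."""
    img = cv2.normalize(event_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    return corners if found else None

def calibrate(event_frames, pattern_size=(9, 6), square=0.025, size=(640, 480)):
    """Standard pinhole calibration from several accumulated event frames.

    Assumes at least one frame yields a successful pattern detection.
    """
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = (np.mgrid[0:pattern_size[0], 0:pattern_size[1]]
                   .T.reshape(-1, 2) * square)
    obj_pts, img_pts = [], []
    for f in event_frames:
        c = detect_corners(f, pattern_size)
        if c is not None:
            obj_pts.append(objp)
            img_pts.append(c)
    # returns RMS reprojection error, camera matrix, distortion, rvecs, tvecs
    return cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```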
Commun Psychol
July 2024
Centre for Human Brain Health, University of Birmingham, Birmingham, UK.
When we recall a past event, we reconstruct it based on a combination of episodic details and semantic knowledge (e.g., prototypes).
Biomimetics (Basel)
July 2024
School of Engineering, Edith Cowan University, Perth, WA 6027, Australia.
Simultaneous Localization and Mapping (SLAM) is a crucial function for most autonomous systems, allowing them to both navigate through and create maps of unfamiliar surroundings. Traditional Visual SLAM, also commonly known as VSLAM, relies on frame-based cameras and structured processing pipelines, which face challenges in dynamic or low-light environments. However, recent advancements in event camera technology and neuromorphic processing offer promising opportunities to overcome these limitations.