Real-World Spatial Synchronization of Event-CMOS Cameras through Deep Learning: A Novel CNN-DGCNN Approach.

Sensors (Basel)

Department of Industrial Engineering and Management, Ariel University, Ariel 40700, Israel.

Published: June 2024

This paper presents a new deep-learning architecture designed to improve the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing scenes, a limitation that event cameras overcome thanks to their superior temporal resolution and motion clarity. Effective integration of the two technologies, however, depends on precise spatial alignment, a challenge that current algorithms do not address. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision correlates strongly with the spatial concentration and density of events: dense event clusters improve calibration accuracy, while calibration errors grow as events become more uniformly distributed. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advances in mixed-modality visual systems. The implications are significant for applications that require both detailed visual and fine-grained temporal information, setting new directions for visual perception technologies.
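
The abstract does not spell out the network's internals, but for orientation, the sketch below shows the EdgeConv building block that DGCNNs apply to point-like data such as raw events. The feature dimensions, neighbourhood size k, and the offset-regression framing are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch of a DGCNN-style EdgeConv layer over raw event points,
# assuming events arrive as (x, y, t, polarity) tuples. Layer sizes and k
# are illustrative, not the paper's configuration.
import torch
import torch.nn as nn


def knn_graph(points: torch.Tensor, k: int) -> torch.Tensor:
    """Return indices of the k nearest neighbours of each event point.

    points: (N, D) tensor of event features, e.g. D = 4 for (x, y, t, p).
    """
    dists = torch.cdist(points, points)  # (N, N) pairwise distances
    return dists.topk(k + 1, largest=False).indices[:, 1:]  # drop self-match


class EdgeConv(nn.Module):
    """EdgeConv as used in DGCNN: an MLP over [x_i, x_j - x_i] edge
    features, followed by a max over each point's neighbourhood."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        idx = knn_graph(x, self.k)            # (N, k) neighbour indices
        neighbours = x[idx]                   # (N, k, D)
        center = x.unsqueeze(1).expand_as(neighbours)
        edges = torch.cat([center, neighbours - center], dim=-1)  # (N, k, 2D)
        return self.mlp(edges).max(dim=1).values  # (N, out_dim)


# Usage: embed a cloud of 2048 synthetic events so later (hypothetical)
# layers could regress the spatial offset to the CMOS frame.
events = torch.rand(2048, 4)                       # (x, y, t, p)
features = EdgeConv(in_dim=4, out_dim=64)(events)  # (2048, 64)
```

Treating events as a point cloud lets the neighbour graph be rebuilt from feature-space proximity at each layer, which is the "dynamic" part of DGCNN and what allows event data to be processed directly, without first rendering it into frames.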


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11244468
DOI: http://dx.doi.org/10.3390/s24134050

Publication Analysis

Top Keywords (keyword: frequency)

event cameras: 12
spatial synchronization: 8
synchronization cmos: 8
cmos event: 8
event: 6
synchronization: 5
cameras: 5
real-world spatial: 4
synchronization event-cmos: 4
event-cmos cameras: 4

Similar Publications

Smart cities deploy various sensors, such as microphones and RGB cameras, to collect data that improve the safety and comfort of citizens. As data annotation is expensive, self-supervised methods such as contrastive learning are used to learn audio-visual representations for downstream tasks. Focusing on surveillance data, we investigate two common limitations of audio-visual contrastive learning: false negatives and the minimal sufficient information bottleneck.

Background and Purpose: Physical therapists play a vital role in preventing and managing falls in older adults. With advancements in digital health and technology, community fall prevention programs need to adopt valid and reliable telehealth-based assessments. The purpose of this study was to evaluate the validity and reliability of the telehealth-based timed up and go (TUG) test, 30-second chair stand test (30s-CST), and 4-stage balance test as functional components of the Stopping Elderly Accidents, Deaths, and Injuries (STEADI) fall risk assessment.

Significance: Functional brain imaging experiments in awake animals require meticulous monitoring of animal behavior to screen for spontaneous behavioral events. Although these events occur naturally, they can alter cell signaling and hemodynamic activity in the brain and confound functional brain imaging measurements.

Aim: We developed a centralized, user-friendly, and stand-alone platform that includes an animal fixation frame, compact peripheral sensors, and a portable data acquisition system.

Infrared array sensor-based fall detection and activity recognition systems have gained momentum as promising solutions for enhancing healthcare monitoring and safety in various environments. Unlike camera-based systems, which can be privacy-intrusive, IR array sensors offer a non-invasive, reliable approach for fall detection and activity recognition while preserving privacy. This work proposes a novel method to distinguish between normal motion and fall incidents by analyzing thermal patterns captured by infrared array sensors.

Visual sensors, including 3D light detection and ranging, neuromorphic dynamic vision sensors, and conventional frame cameras, are increasingly integrated into edge-side intelligent machines. However, their data are heterogeneous, causing complexity in system development. Moreover, conventional digital hardware is constrained by the von Neumann bottleneck and the physical limits of transistor scaling.
