This paper presents a new deep-learning architecture designed to improve the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments, a limitation that event cameras overcome thanks to their superior temporal resolution and motion clarity. Effective integration of the two technologies, however, depends on precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. Empirically, synchronization precision correlates strongly with the spatial concentration and density of events: denser event clusters yield better calibration, while calibration error increases when events are more uniformly distributed. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advances in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for visual perception technologies.
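As a minimal sketch of the core idea, not the authors' implementation: event-camera output can be treated as a point cloud of (x, y, t) events and processed with the edge-convolution operation at the heart of a DGCNN. All function names, shapes, and weights below are illustrative assumptions.

```python
import numpy as np

def knn_indices(points, k):
    # pairwise squared distances between events in (x, y, t) space
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each event from its own neighbours
    return np.argsort(d2, axis=1)[:, :k]

def edge_conv(points, feats, weight, k=4):
    # One DGCNN-style edge convolution: for each event, build edge
    # features [h_i, h_j - h_i] over its k nearest neighbours in
    # (x, y, t), apply a shared linear map + ReLU, then max-pool.
    idx = knn_indices(points, k)
    h_i = feats[:, None, :].repeat(k, axis=1)         # (N, k, F)
    h_j = feats[idx]                                  # (N, k, F)
    edges = np.concatenate([h_i, h_j - h_i], -1)      # (N, k, 2F)
    return np.maximum(edges @ weight, 0).max(axis=1)  # (N, out)

rng = np.random.default_rng(0)
events = rng.random((32, 3))           # synthetic (x, y, t) events
w = rng.standard_normal((6, 8)) * 0.1  # shared MLP weights, 2F=6 -> 8
out = edge_conv(events, events, w)
print(out.shape)  # (32, 8)
```

Because the neighbour graph is recomputed from the current features, stacking such layers lets the receptive field adapt to event density, which is consistent with the abstract's observation that denser event clusters align better.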
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11244468
DOI: http://dx.doi.org/10.3390/s24134050
Front Robot AI
January 2025
IDLab, Ghent University-imec, Ghent, Belgium.
Smart cities deploy various sensors, such as microphones and RGB cameras, to collect data that improve the safety and comfort of citizens. Because data annotation is expensive, self-supervised methods such as contrastive learning are used to learn audio-visual representations for downstream tasks. Focusing on surveillance data, we investigate two common limitations of audio-visual contrastive learning: false negatives and the minimal sufficient information bottleneck.
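To make the false-negative limitation concrete, here is a hedged sketch (not the paper's method) of an InfoNCE-style audio-visual contrastive loss: standard InfoNCE treats every other clip in the batch as a negative, so clips from the same scene become false negatives; one common mitigation, shown here, masks them out of the denominator. The label array and tau value are illustrative assumptions.

```python
import numpy as np

def infonce_with_fn_mask(audio, video, labels, tau=0.1):
    # L2-normalise embeddings, then compute scaled cosine similarities.
    a = audio / np.linalg.norm(audio, axis=1, keepdims=True)
    v = video / np.linalg.norm(video, axis=1, keepdims=True)
    sim = a @ v.T / tau                        # (B, B) similarity logits
    # Off-diagonal pairs with the same scene label are false negatives:
    # pushing them apart would hurt the representation, so mask them.
    same = labels[:, None] == labels[None, :]
    mask = same & ~np.eye(len(labels), dtype=bool)
    sim = np.where(mask, -np.inf, sim)         # drop false negatives
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - np.diag(sim)))

rng = np.random.default_rng(1)
loss = infonce_with_fn_mask(rng.random((8, 16)), rng.random((8, 16)),
                            np.array([0, 0, 1, 1, 2, 2, 3, 3]))
print(loss >= 0)  # True: the positive is always in the denominator
```

In practice such labels are unavailable in self-supervised settings, which is why false negatives are a genuine limitation rather than a solved masking problem.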
J Geriatr Phys Ther
January 2025
Department of Physical Therapy, University of St. Augustine for Health Sciences, St. Augustine, Florida.
Background And Purpose: Physical therapists play a vital role in preventing and managing falls in older adults. With advancements in digital health and technology, community fall prevention programs need to adopt valid and reliable telehealth-based assessments. The purpose of this study was to evaluate the validity and reliability of the telehealth-based timed up and go (TUG) test, 30-second chair stand test (30s-CST), and 4-stage balance test as functional components of the Stopping Elderly Accidents, Deaths, and Injuries (STEADI) fall risk assessment.
Neurophotonics
January 2025
Northeastern University, Department of Bioengineering, Boston, Massachusetts, United States.
Significance: Functional brain imaging experiments in awake animals require meticulous monitoring of animal behavior to screen for spontaneous behavioral events. Although these events occur naturally, they can alter cell signaling and hemodynamic activity in the brain and confound functional brain imaging measurements.
Aim: We developed a centralized, user-friendly, and stand-alone platform that includes an animal fixation frame, compact peripheral sensors, and a portable data acquisition system.
Sensors (Basel)
January 2025
Faculty of Science and Engineering, Saga University, Saga 840-8502, Japan.
Infrared array sensor-based fall detection and activity recognition systems have gained momentum as promising solutions for enhancing healthcare monitoring and safety in various environments. Unlike camera-based systems, which can be privacy-intrusive, IR array sensors offer a non-invasive, reliable approach for fall detection and activity recognition while preserving privacy. This work proposes a novel method to distinguish between normal motion and fall incidents by analyzing thermal patterns captured by infrared array sensors.
Nat Commun
January 2025
Key Lab of Fabrication Technologies for Integrated Circuits Institute of Microelectronics, Chinese Academy of Sciences, 100029, Beijing, China.
Visual sensors, including 3D light detection and ranging, neuromorphic dynamic vision sensors, and conventional frame cameras, are increasingly integrated into edge-side intelligent machines. However, their data are heterogeneous, which complicates system development. Moreover, conventional digital hardware is constrained by the von Neumann bottleneck and the physical limits of transistor scaling.