In this data article, we introduce the Multi-Modal Event-based Vehicle Detection and Tracking (MEVDT) dataset. This dataset provides a synchronized stream of event data and grayscale images of traffic scenes, captured using the Dynamic and Active-Pixel Vision Sensor (DAVIS) 240c hybrid event-based camera. MEVDT comprises 63 multi-modal sequences with approximately 13k images, 5M events, 10k object labels, and 85 unique object tracking trajectories.
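To make the structure of such a synchronized multi-modal stream concrete, here is a minimal sketch of loading and slicing one sequence. The file names and array layouts (events as rows of timestamp, x, y, polarity) are illustrative assumptions for this sketch, not MEVDT's documented format.

    # Minimal sketch of loading one hypothetical MEVDT-style sequence.
    # File names and array layouts are assumptions, not the dataset's
    # documented format.
    import numpy as np

    def load_sequence(seq_dir):
        # Events as an (N, 4) array: timestamp, x, y, polarity.
        events = np.load(f"{seq_dir}/events.npy")
        # Grayscale frames as an (F, H, W) array plus per-frame
        # timestamps, synchronized to the event stream.
        frames = np.load(f"{seq_dir}/frames.npy")
        frame_ts = np.load(f"{seq_dir}/frame_timestamps.npy")
        return events, frames, frame_ts

    def events_between(events, t0, t1):
        # Slice the asynchronous event stream to the window between
        # two consecutive frame timestamps.
        mask = (events[:, 0] >= t0) & (events[:, 0] < t1)
        return events[mask]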
Road conditions, often degraded by insufficient maintenance or adverse weather, significantly contribute to accidents, a problem exacerbated by the limited human reaction time to sudden hazards such as potholes. Early detection of distant potholes is crucial for timely corrective actions, such as reducing speed or avoiding obstacles, to mitigate vehicle damage and accidents. This paper introduces a novel approach that utilizes perspective transformation to enhance pothole detection at different distances, focusing particularly on distant potholes.
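As an illustration of the rectification idea such an approach builds on, the sketch below warps the road plane to a bird's-eye view with OpenCV, so that distant road regions are no longer compressed by perspective. The source corner points are hand-picked placeholders, not the paper's calibration.

    # Minimal bird's-eye-view rectification sketch: a detector run on
    # the warped image sees distant potholes at a scale comparable to
    # nearby ones. Corner points are placeholders chosen by hand.
    import cv2
    import numpy as np

    frame = cv2.imread("road.png")  # hypothetical input frame
    h, w = frame.shape[:2]

    # Four points on the road surface in the source image (a trapezoid
    # caused by perspective) and their rectified destinations.
    src = np.float32([[w * 0.45, h * 0.60], [w * 0.55, h * 0.60],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

    M = cv2.getPerspectiveTransform(src, dst)
    birds_eye = cv2.warpPerspective(frame, M, (w, h))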
Despite significant strides toward vehicle autonomy, robust perception under low-light conditions remains a persistent challenge. In this study, we investigate the potential of multispectral imaging, leveraging deep learning models to enhance object detection performance in the context of nighttime driving. Features encoded from the red, green, and blue (RGB) visual spectrum and from thermal infrared images are combined to implement a multispectral object detection model.
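A minimal sketch of one common way to combine the two spectra, early fusion, is shown below: spatially registered RGB and thermal images are stacked into a single four-channel input for a detector. The normalization and the fusion point are illustrative assumptions, not necessarily the architecture evaluated in the study.

    # Early-fusion sketch for multispectral detection. The fusion stage
    # and normalization are illustrative assumptions.
    import numpy as np

    def fuse_early(rgb, thermal):
        # rgb: (H, W, 3) uint8; thermal: (H, W) uint8, registered to rgb.
        rgb_n = rgb.astype(np.float32) / 255.0
        th_n = thermal.astype(np.float32)[..., None] / 255.0
        # Returns an (H, W, 4) tensor a detector's first conv layer
        # can consume directly.
        return np.concatenate([rgb_n, th_n], axis=-1)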
Event-based vision is an emerging field of computer vision with unique properties, such as asynchronous visual output, high temporal resolution, and data generation driven by brightness changes. Combined with frame-based vision, these properties can enable robust high-temporal-resolution object detection and tracking. In this paper, we present a hybrid, high-temporal-resolution object detection and tracking approach that combines learned and classical methods using synchronized images and event data.
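The sketch below illustrates the general hybrid pattern: a learned detector fires on each grayscale frame, while a classical update driven by events refines the track at high rate between frames. The centroid-shift update is an illustrative stand-in, not the paper's exact tracker.

    # Between-frame track update driven by events. The event layout
    # (t, x, y, p) and the centroid-shift rule are assumptions for
    # illustration.
    import numpy as np

    def update_box_with_events(box, events):
        # box: (cx, cy, w, h); events: (N, 4) rows of (t, x, y, p)
        # falling inside the inter-frame interval.
        cx, cy, w, h = box
        inside = ((np.abs(events[:, 1] - cx) < w / 2) &
                  (np.abs(events[:, 2] - cy) < h / 2))
        if not inside.any():
            return box
        # Shift the box toward the mean position of the events it
        # currently contains.
        return (events[inside, 1].mean(), events[inside, 2].mean(), w, h)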
Utilizing military convoys in humanitarian missions can increase the overall performance of healthcare logistics operations. To properly gauge the performance of autonomous ground convoy systems in military humanitarian operations, a proper framework of comparative performance metrics needs to be established. Past efforts in this domain have focused heavily on narrow and specialized areas of convoy performance such as human factors, trust metrics, or string stability analysis.
Research on the effect of adverse weather conditions on the performance of vision-based algorithms for automotive tasks has attracted significant interest. It is generally accepted that adverse weather conditions reduce the quality of captured images and have a detrimental effect on the performance of algorithms that rely on these images. Rain is a common and significant source of image quality degradation.
Vision-based motion estimation is an effective means of mobile robot localization and is often used in conjunction with other sensors for navigation and path planning. This paper presents a low-overhead, real-time ego-motion estimation (visual odometry) system based on either a stereo or an RGB-D sensor. The algorithm outperforms typical frame-to-frame approaches in accuracy by maintaining a limited local map, while requiring significantly less memory and computational power than the global maps common in full visual SLAM methods.
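The sketch below shows the frame-to-local-map step that distinguishes this style of visual odometry from frame-to-frame matching: 2D features in the current frame are registered against a small persistent set of 3D map points via PnP with RANSAC. Feature matching and map maintenance are elided, and the variable shapes are assumptions.

    # Frame-to-local-map pose estimation sketch using OpenCV's
    # RANSAC-based PnP solver. Matching and map upkeep are elided.
    import cv2
    import numpy as np

    def estimate_pose(map_points_3d, matched_pixels_2d, K):
        # map_points_3d: (N, 3) local-map points; matched_pixels_2d:
        # (N, 2) their observations in the current frame; K: 3x3
        # camera intrinsics.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            map_points_3d.astype(np.float32),
            matched_pixels_2d.astype(np.float32),
            K, None)
        return (rvec, tvec, inliers) if ok else None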
Body-worn inertial sensors have enabled motion capture outside of the laboratory setting. In this work, an inertial measurement unit was attached to the upper arm to track and discriminate between shoulder motion gestures, in order to help prevent shoulder overuse injuries in athletics through real-time preventative feedback. We present a detection and classification approach that can be used to count the number of times certain motion gestures occur.
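One simple way to realize such gesture counting, sketched below, is peak detection on the angular-rate magnitude from the arm-worn IMU. The sampling rate, threshold, and minimum peak spacing are illustrative assumptions rather than the study's calibrated values.

    # Gesture-occurrence counting sketch via peak detection. Sampling
    # rate, threshold, and spacing are illustrative assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    def count_gestures(gyro, fs=100.0, thresh=3.0):
        # gyro: (N, 3) angular rate in rad/s; fs: sample rate in Hz.
        magnitude = np.linalg.norm(gyro, axis=1)
        # Require peaks above the threshold and at least 0.5 s apart,
        # so a single shoulder motion is not counted twice.
        peaks, _ = find_peaks(magnitude, height=thresh,
                              distance=int(0.5 * fs))
        return len(peaks)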