A considerable number of vehicular accidents occur in low-mileage zones such as school streets, residential neighborhoods, and parking lots. The proposed work therefore aims to provide a novel ADAS that warns about dangerous scenarios by analyzing the driver's attention together with the distances between the vehicle and objects detected on the road. This approach is made possible by concurrent Head Pose Estimation (HPE) and Object/Pedestrian Detection, both of which have independently demonstrated viable application in the automotive industry for reducing vehicle collisions. The proposed system takes advantage of stereo vision for HPE, enabling the computation of Euler angles with a low average error and the classification of the driver's attention on the road using neural networks. For object detection, stereo vision is used to estimate the distance between the vehicle and an approaching object; this is achieved with the state-of-the-art YOLO-R algorithm together with a fast template-matching technique known as SoRA that provides lower processing times. The result is an ADAS designed to ensure adequate braking time, considering the driver's attention on the road and the distances to detected objects.
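To make the distance and braking-time logic concrete, the following is a minimal Python sketch of how a rectified stereo pair could yield an object distance (Z = f * B / d) and how that distance could be compared against a stopping distance that depends on the driver's attention. All function names, camera parameters, reaction times, and the deceleration value are illustrative assumptions; they are not taken from the paper.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair (standard pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_px * baseline_m / disparity_px


def warn_driver(distance_m: float, ego_speed_mps: float, driver_attentive: bool,
                reaction_time_s: float = 1.5, decel_mps2: float = 6.0) -> bool:
    """Return True if the remaining distance is shorter than the stopping distance.

    Reaction times and deceleration are generic illustrative values,
    not parameters reported in the paper.
    """
    # Assume an inattentive driver needs extra reaction time before braking.
    effective_reaction = reaction_time_s if driver_attentive else 2.5
    stopping_distance = ego_speed_mps * effective_reaction + ego_speed_mps ** 2 / (2 * decel_mps2)
    return distance_m < stopping_distance


# Example: an object detected with 40 px disparity, 700 px focal length, 12 cm baseline.
z = depth_from_disparity(40.0, 700.0, 0.12)                      # about 2.1 m
print(z, warn_driver(z, ego_speed_mps=8.0, driver_attentive=False))  # warning raised
```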

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367059
DOI: http://dx.doi.org/10.1016/j.heliyon.2024.e35929

Publication Analysis

Top Keywords (keyword: frequency)

stereo vision: 12
driver's attention: 12
object detection: 8
head pose: 8
pose estimation: 8
adas system: 8
attention road: 8
real-time vehicle: 4
vehicle safety: 4
system: 4

Similar Publications

In the realm of 3D measurement, photometric stereo excels in capturing high-frequency details but suffers from accumulated errors that lead to low-frequency distortions in the reconstructed surface. Conversely, light field (LF) reconstruction provides satisfactory low-frequency geometry but sacrifices spatial resolution, impacting high-frequency detail quality. To tackle these challenges, we propose a photometric stereoscopic light field measurement (PSLFM) scheme that harnesses the strengths of both methods.
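The complementary-frequency idea behind PSLFM can be illustrated with a simple blend: keep the low-frequency geometry of a light-field depth map and the high-frequency detail of a photometric-stereo depth map. The sketch below is a generic frequency-domain fusion under that assumption, not the authors' PSLFM pipeline; all names and parameters are hypothetical.

```python
import numpy as np


def fuse_depth_maps(lf_depth: np.ndarray, ps_depth: np.ndarray, sigma_px: float = 10.0) -> np.ndarray:
    """Blend low frequencies of the light-field depth with high frequencies of the
    photometric-stereo depth using a Gaussian split in the Fourier domain."""
    assert lf_depth.shape == ps_depth.shape
    h, w = lf_depth.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) * (2 * np.pi * sigma_px) ** 2 / 2.0)
    fused = np.fft.fft2(lf_depth) * lowpass + np.fft.fft2(ps_depth) * (1.0 - lowpass)
    return np.real(np.fft.ifft2(fused))


# Synthetic example: a smooth dome (LF-like geometry) plus fine ripples (PS-like detail).
yy, xx = np.mgrid[0:128, 0:128] / 128.0
lf = 1.0 - ((xx - 0.5) ** 2 + (yy - 0.5) ** 2)      # low-frequency shape
ps = lf + 0.01 * np.sin(60.0 * np.pi * xx)           # adds high-frequency ripples
print(fuse_depth_maps(lf, ps).shape)                 # (128, 128)
```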


To address the challenges of high computational complexity and poor real-time performance in binocular vision-based Unmanned Aerial Vehicle (UAV) formation flight, this paper introduces a UAV localization algorithm based on a lightweight object detection model. Firstly, we optimized the YOLOv5s model using lightweight design principles, resulting in Yolo-SGN. This model achieves a 65.
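As a rough illustration of the binocular localization step such a system relies on (not Yolo-SGN itself), the sketch below triangulates the 3D position of a detected target from the matched bounding-box centres in a rectified stereo pair; the calibration values and function names are assumptions.

```python
def triangulate_target(u_left: float, u_right: float, v: float,
                       focal_px: float, baseline_m: float,
                       cx: float, cy: float) -> tuple[float, float, float]:
    """Recover (X, Y, Z) of a detected target from the horizontal pixel coordinates
    of the same bounding-box centre in a rectified stereo pair."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    z = focal_px * baseline_m / disparity
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z


# Example: box centres at u=660 px (left) and u=630 px (right), v=360 px,
# with a 900 px focal length, a 20 cm baseline, and a 1280x720 image.
print(triangulate_target(660, 630, 360, focal_px=900, baseline_m=0.20, cx=640, cy=360))
```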


Artificial Visual System for Stereo-Orientation Recognition Based on Hubel-Wiesel Model.

Biomimetics (Basel)

January 2025

Institute of AI for Industries, Chinese Academy of Sciences, 168 Tianquan Road, Nanjing 211100, China.

Stereo-orientation selectivity is a fundamental neural mechanism in the brain that plays a crucial role in perception. However, because the recognition of high-dimensional spatial information typically occurs in higher-order cortex, little is known about the mechanisms underlying stereo-orientation selectivity, and a modeling strategy is lacking. A classical explanation for two-dimensional orientation selectivity in the primary visual cortex is the Hubel-Wiesel model, a cascading neural connection structure.
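A minimal numerical illustration of that cascade is a bank of oriented Gabor "simple cells" whose phase-pooled energy acts as a "complex cell" readout. The sketch below is a generic textbook version of the Hubel-Wiesel idea, not the authors' artificial visual system; every parameter in it is an assumption.

```python
import numpy as np


def gabor_kernel(theta: float, size: int = 21, wavelength: float = 8.0,
                 sigma: float = 4.0, phase: float = 0.0) -> np.ndarray:
    """A simple-cell-like oriented Gabor receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength + phase)


def preferred_orientation(patch: np.ndarray, n_orientations: int = 8) -> float:
    """Complex-cell-like readout: pool two phases per orientation (energy model)
    and return the orientation (degrees) with the largest response."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    energies = []
    for theta in thetas:
        even = np.sum(patch * gabor_kernel(theta, size=patch.shape[0], phase=0.0))
        odd = np.sum(patch * gabor_kernel(theta, size=patch.shape[0], phase=np.pi / 2))
        energies.append(even ** 2 + odd ** 2)
    return float(np.degrees(thetas[int(np.argmax(energies))]))


# Example: a grating whose luminance varies along the 45-degree direction.
half = 10
y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
grating = np.cos(2 * np.pi * (x * np.cos(np.pi / 4) + y * np.sin(np.pi / 4)) / 8.0)
print(preferred_orientation(grating))  # close to 45.0
```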


Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation.

Sensors (Basel)

December 2024

Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan.

Precise depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving, and human-computer interaction. With recent advances in deep learning, monocular depth estimation, thanks to its simplicity, has surpassed traditional stereo camera systems, opening new possibilities in 3D sensing. In this paper, using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder, whose encoder mixes a convolutional neural network with vision transformers and whose adaptive fusion decoder produces high-precision depth maps.
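For orientation, a toy version of such an encoder-decoder can be written in a few lines. The sketch below is a plain convolutional stand-in that omits the vision-transformer branch and the adaptive fusion module, and it assumes a PyTorch implementation; it should not be read as the authors' architecture.

```python
import torch
import torch.nn as nn


class TinyDepthAutoencoder(nn.Module):
    """Schematic monocular depth autoencoder: convolutional encoder, decoder with a
    skip connection, and a sigmoid head producing a dense normalized depth map."""

    def __init__(self) -> None:
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1)  # 64 = 32 decoder + 32 skip

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        f1 = self.enc1(rgb)                    # H/2 features
        f2 = self.enc2(f1)                     # H/4 features
        up = self.dec2(f2)                     # back to H/2
        fused = torch.cat([up, f1], dim=1)     # simple skip "fusion"
        return torch.sigmoid(self.dec1(fused)) # full-resolution depth-like map


# Example: one 3x128x160 RGB frame in, one 1x128x160 depth map out.
model = TinyDepthAutoencoder()
print(model(torch.randn(1, 3, 128, 160)).shape)  # torch.Size([1, 1, 128, 160])
```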

Article Synopsis
  • Accurate 3D information estimation from images is crucial for computer vision, and while binocular stereo vision is a common approach, its reliability is strongly affected by the baseline distance (see the sketch after this list).
  • This research proposes a new method that progressively increases the baseline in multiocular vision, introducing a rectification technique that significantly reduces distortion errors in the images.
  • The method enhances disparity estimation accuracy by 20% for multiocular images and demonstrates superior performance through extensive evaluations against existing methods.
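The synopsis's point about baseline distance can be made concrete with the standard depth-error relation for rectified stereo, |dZ| ~= Z^2 / (f * B) * |dd|: for the same disparity-matching error, a longer baseline B yields a smaller depth error. The short calculation below is a generic textbook sketch, not the paper's method, and its numbers are arbitrary.

```python
def depth_error(z_m: float, focal_px: float, baseline_m: float, disparity_err_px: float = 0.5) -> float:
    """Depth error from a fixed disparity error under Z = f * B / d."""
    return z_m ** 2 / (focal_px * baseline_m) * disparity_err_px


# Progressively longer baselines shrink the depth error for the same half-pixel matching error.
for baseline in (0.1, 0.3, 0.9):
    print(baseline, round(depth_error(z_m=10.0, focal_px=800.0, baseline_m=baseline), 3))
# 0.1 -> 0.625 m, 0.3 -> 0.208 m, 0.9 -> 0.069 m
```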
