Falling is an emergency situation that can result in serious injury or even death, especially in the absence of immediate assistance. Developing a model that can accurately and promptly detect falls is therefore crucial for enhancing quality of life and safety. In the field of object detection, YOLOv8 has recently made notable strides in detection accuracy and speed, but it still struggles with falls owing to variations in lighting, occlusions, and complex human postures. To address these issues, this study proposes SDES-YOLO, a model built on YOLOv8. By incorporating a multi-scale feature extraction pyramid (SDFP), an occlusion-aware attention mechanism (SEAM), an edge and spatial information fusion module (ES3), and a WIoU-Shape loss function, SDES-YOLO significantly enhances fall detection performance in complex scenarios. With only 2.9M parameters and 7.2 GFLOPs of computation, SDES-YOLO achieves an mAP@0.5 of 85.1%, a 3.41% improvement over YOLOv8n, while reducing parameter count and computation by 1.33% and 11.11%, respectively. These results indicate that SDES-YOLO combines efficiency and precision in fall detection: it improves detection accuracy while also lowering computational cost, making it effective even in resource-constrained environments.
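The abstract does not give the exact WIoU-Shape formulation, but bounding-box regression losses of this family typically scale the plain IoU loss by a distance-based focusing factor derived from the smallest box enclosing the prediction and the ground truth. The sketch below illustrates that general idea in the style of Wise-IoU v1; the function names and the specific penalty term are assumptions for illustration, not the paper's definition.

```python
import math

def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2). Intersection-over-union of two boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def wiou_loss(pred, target):
    # Illustrative Wise-IoU-style loss (an assumption, not the paper's
    # exact WIoU-Shape): the IoU loss is multiplied by a focusing factor
    # that grows with the normalized distance between box centers.
    l_iou = 1.0 - iou(pred, target)
    # Box centers.
    cxp, cyp = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cxt, cyt = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    # Width/height of the smallest enclosing box (normalizes the distance).
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r = math.exp(((cxp - cxt) ** 2 + (cyp - cyt) ** 2) / (wg ** 2 + hg ** 2))
    return r * l_iou
```

For a perfectly aligned prediction the loss is zero; as the predicted box drifts away from the ground truth, the center-distance factor amplifies the gradient, which is the usual motivation for such penalty terms in occlusion-heavy scenes.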


Source

http://dx.doi.org/10.1038/s41598-025-86593-9

Publication Analysis

Top Keywords

fall detection (12), detection accuracy (8), sdes-yolo model (8), sdes-yolo (6), detection (6), sdes-yolo high-precision (4), high-precision lightweight (4), model (4), lightweight model (4), model fall (4)
