In agriculture, and specifically in livestock monitoring, the ability of drones to track multiple targets is essential for advancing the field. However, limited onboard computing resources and unpredictable drone motion often produce blurred video frames, object occlusions, and scale variations. These inconsistencies reduce tracking accuracy and make traditional algorithms inadequate for drone footage. This study introduces an enhanced deep learning-based multi-target drone tracking framework capable of real-time processing. The proposed method unifies object detection and tracking by extracting and sharing features across consecutive frame pairs, improving computational efficiency. It employs diverse loss functions to address class and sample distribution imbalances and includes a composite deblurring module to improve detection accuracy. Object association uses a dual-regression bounding box technique that supports identity verification and motion prediction. Real-time tracking is achieved by predicting object locations in subsequent frames. Evaluation against leading benchmarks shows that the system improves both precision and speed, achieving a 4.3% increase in Multi-Object Tracking Accuracy (MOTA) and a 7.7% boost in F1 score.
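The abstract describes the pipeline only at a high level; the exact architecture and dual-regression association are detailed in the full paper. As a rough illustration of the general idea of frame-to-frame tracking, the sketch below (plain Python/NumPy; the box format, greedy IoU matching, and constant-velocity prediction are assumptions, not the authors' method) pairs detections from consecutive frames and propagates a simple motion estimate to predict each track's location in the next frame.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Greedy IoU matching between predicted track boxes and new detections.
    Returns (matches, unmatched_track_ids, unmatched_detection_indices)."""
    matches, used = [], set()
    for tid, track in tracks.items():
        best_j, best_iou = -1, iou_thresh
        for j, det in enumerate(detections):
            if j in used:
                continue
            score = iou(track["pred_box"], det)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            matches.append((tid, best_j))
            used.add(best_j)
    matched_ids = {tid for tid, _ in matches}
    unmatched_tracks = [tid for tid in tracks if tid not in matched_ids]
    unmatched_dets = [j for j in range(len(detections)) if j not in used]
    return matches, unmatched_tracks, unmatched_dets

def step(tracks, detections, next_id):
    """Advance the tracker by one frame: match, update, spawn, predict."""
    matches, lost, new = associate(tracks, detections)
    for tid, j in matches:
        prev = tracks[tid]["box"]
        cur = np.asarray(detections[j], dtype=float)
        velocity = cur - prev                      # simple constant-velocity model
        tracks[tid].update(box=cur, pred_box=cur + velocity)
    for tid in lost:                               # drop tracks with no match this frame
        del tracks[tid]
    for j in new:                                  # start a track for each unmatched detection
        box = np.asarray(detections[j], dtype=float)
        tracks[next_id] = {"box": box, "pred_box": box.copy()}
        next_id += 1
    return tracks, next_id
```

In the actual framework, this hand-crafted IoU matching and constant-velocity prediction would be replaced by the learned dual-regression bounding boxes and the features shared across consecutive frame pairs; the sketch only conveys the associate-then-predict loop that real-time tracking rests on.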
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11471501 | PMC |
| http://dx.doi.org/10.1016/j.heliyon.2024.e38316 | DOI Listing |