Visual object tracking is a fundamental task in computer vision that requires estimating the position and scale of a target object in a video sequence. However, scale variation remains a significant challenge that affects the performance and robustness of many trackers, especially those based on the discriminative correlation filter (DCF). Existing scale estimation methods based on multi-scale features are computationally expensive and degrade the real-time performance of DCF-based trackers, especially in scenarios with restricted computing power. In this paper, we propose a practical and efficient solution that handles scale changes without multi-scale features and can be combined with any DCF-based tracker as a plug-in module. We use color name (CN) features and a salient feature to reduce the dimensionality of the target appearance model. We then estimate the target scale with a Gaussian distribution model and introduce global and local scale-consistency assumptions to recover the target's scale. Finally, we fuse the tracking results with those of the DCF-based tracker to obtain the target's new position and scale. We evaluate our method on the Temple Color 128 benchmark dataset and compare it with several popular trackers. Our method achieves competitive accuracy and robustness while significantly reducing the computational cost.
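For readers unfamiliar with the DCF framework the paper builds on, the core translation-estimation step can be sketched as follows. This is a minimal single-channel correlation filter (MOSSE-style, trained in closed form in the Fourier domain), not the paper's implementation; the function names and the regularization value `lam` are illustrative assumptions.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired response: a 2-D Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_dcf(patch, label, lam=1e-2):
    """Closed-form single-channel DCF in the Fourier domain:
    H = G .* conj(F) / (F .* conj(F) + lam)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    """Correlate a new patch with the learned filter; the location of
    the response peak gives the estimated target translation."""
    F = np.fft.fft2(patch)
    response = np.real(np.fft.ifft2(H * F))
    return np.unravel_index(np.argmax(response), response.shape)
```

A full DCF tracker repeats this train/detect cycle per frame over multi-channel features (e.g. HOG or the CN features mentioned above); the paper's contribution is to add scale estimation on top of such a tracker without evaluating the filter at multiple scales.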


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490814
DOI: http://dx.doi.org/10.3390/s23177516


Similar Publications


The discriminative correlation filter (DCF)-based tracking method has shown good accuracy and efficiency in visual tracking. However, the periodic assumption of sample space causes unwanted boundary effects, restricting the tracker's ability to distinguish between the target and background. Additionally, in the real tracking environment, interference factors such as occlusion, background clutter, and illumination changes cause response aberration and, thus, tracking failure.


Owing to the accuracy and computational efficiency of the discriminative correlation filter (DCF), DCF-based methods have been widely used for target tracking by unmanned aerial vehicles (UAVs). However, UAV tracking inevitably encounters challenging scenarios such as background clutter, similar targets, partial/full occlusion, and fast motion. These challenges generally lead to multi-peak interference in the response map, causing the target to drift or even be lost.


To ensure that computers can accomplish specific tasks intelligently and autonomously, it is common to introduce more knowledge into artificial intelligence (AI) technology as prior information, by imitating the structure and mindset of the human brain. Currently, unmanned aerial vehicle (UAV) tracking plays an important role in military and civilian fields. However, robust and accurate UAV tracking remains a demanding task, due to limited computing capability, unanticipated object appearance variations, and a volatile environment.


Discriminative correlation filter (DCF) tracking algorithms are commonly used for visual tracking. However, we observed that different spatio-temporal targets exhibit varied visual appearances, and most DCF-based trackers neglect to exploit this spatio-temporal information during the tracking process. To address the above-mentioned issues, we propose a three-way adaptive spatio-temporal correlation filtering tracker, named ASCF, that makes fuller use of the spatio-temporal information during tracking.

