This paper proposes a novel method for motion multi-object matching and position estimation from unsynchronized image sequences in the absence of salient features. The method addresses the limitations of traditional feature matching, which requires static objects and salient features, and of the epipolar constraint, which requires synchronized images. First, unsynchronized image sequences are captured by three calibrated cameras, and for each moving object three spatial planes are established from the multiple images. Each pair of spatial planes determines a candidate trajectory of that object and yields a candidate position at a specified height. A candidate position matrix for the multiple objects is then obtained between the first and second cameras, and another between the first and third cameras. Based on the principle of minimum distance for motion multi-object matching, a flexible search between the two candidate position matrices computes the distances and matches the objects at the minimum distances. From the matching results, a new position estimation method based on a line-plane constraint is established. Finally, synthetic data and real images are used to evaluate the proposed method and to compare it with a matching algorithm that relies on the epipolar constraint and synchronized images. The experimental results show that the proposed method is less sensitive to noise but slower in computation.
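As a compact illustration of the matching step, the Python/NumPy sketch below captures the minimum-distance principle described above: for the correct pairing, the candidate position computed from cameras 1 and 2 should nearly coincide with the one computed from cameras 1 and 3. The array names P12 and P13, the function names, the greedy per-object search, and the generic line-plane intersection helper are illustrative assumptions, not the paper's flexible search method or its specific line-plane constraint.

import numpy as np

def line_plane_intersection(p0, d, n, c):
    # Intersect the line x = p0 + t*d with the plane n·x = c.
    # Standard geometry, shown only to illustrate a line-plane constraint;
    # the paper's constraint may be formulated differently.
    t = (c - n @ p0) / (n @ d)
    return p0 + t * d

def match_by_minimum_distance(P12, P13):
    # Greedy minimum-distance matching between two candidate position matrices.
    # P12: (N, M, 3) array; entry [i, j] is the candidate 3D position obtained
    #      by pairing object i in camera 1 with object j in camera 2.
    # P13: (N, K, 3) array; entry [i, k] pairs object i in camera 1 with
    #      object k in camera 3.
    # For the correct pairing the two candidates should nearly coincide, so
    # for each camera-1 object we pick the (j, k) pair with minimum distance.
    matches, positions = [], []
    for i in range(P12.shape[0]):
        # pairwise distances between all cam-2 and cam-3 candidates of object i
        d = np.linalg.norm(P12[i][:, None, :] - P13[i][None, :, :], axis=-1)
        j, k = np.unravel_index(np.argmin(d), d.shape)
        matches.append((i, int(j), int(k)))
        positions.append(0.5 * (P12[i, j] + P13[i, k]))  # simple fused estimate
    return matches, np.asarray(positions)

A global one-to-one assignment (for example, the Hungarian algorithm over the per-pair distances) could replace the greedy search when duplicate assignments must be excluded; the paper's flexible search may handle this differently.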

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11882916
DOI: http://dx.doi.org/10.1038/s41598-025-92237-9

Publication Analysis

Top Keywords

multi-object matching (16)
candidate position (16)
motion multi-object (12)
position estimation (12)
unsynchronized image (12)
image sequences (12)
proposed method (12)
matching position (8)
estimation based (8)
based unsynchronized (8)

Similar Publications


Video data and algorithms have been driving advances in multi-object tracking (MOT). While existing MOT datasets focus on occlusion and appearance similarity, complex motion patterns are widespread yet overlooked. To address this issue, we introduce a new dataset called BEE24 to highlight complex motions.


As the global economy expands, waterway transportation has become increasingly crucial to the logistics sector. This growth presents both significant challenges and opportunities for enhancing the accuracy of ship detection and tracking through the application of artificial intelligence. This article introduces a multi-object tracking system designed for unmanned aerial vehicles (UAVs), utilizing the YOLOv7 and Deep SORT algorithms for detection and tracking, respectively.


SurgiTrack: Fine-grained multi-class multi-tool tracking in surgical videos.

Med Image Anal

April 2025

University of Strasbourg, CAMMA, ICube, CNRS, INSERM, France; IHU Strasbourg, Strasbourg, France.

Accurate tool tracking is essential for the success of computer-assisted intervention. Previous efforts often modeled tool trajectories rigidly, overlooking the dynamic nature of surgical procedures, especially tracking scenarios like out-of-body and out-of-camera views. Addressing this limitation, the new CholecTrack20 dataset provides detailed labels that account for multiple tool trajectories in three perspectives: (1) intraoperative, (2) intracorporeal, and (3) visibility, representing the different types of temporal duration of tool tracks.

Article Synopsis
  • Drone aerial imaging is becoming crucial due to advancements in optical sensor technology, but efficient multi-object tracking remains a challenge, especially with traditional methods that separate identification from tracking.
  • A new Transformer-based framework is proposed, which integrates object detection and tracking into one process using self-attention mechanisms, simplifying the tracking pipeline and improving performance significantly.
  • The system employs innovative techniques like trajectory detection label matching and cross-frame self-attention to enhance tracking accuracy and stability, with experimental results validating its effectiveness on datasets like VisDrone and UAVDT.
