This work presents DePOTR, a novel transformer-based method for hand pose estimation. We evaluate DePOTR on four benchmark datasets, where it outperforms other transformer-based methods while achieving results on par with other state-of-the-art methods. To further demonstrate the strength of DePOTR, we propose MuTr, a novel multi-stage approach that operates directly on full-scene depth images. MuTr removes the need for two separate models in the hand pose estimation pipeline (one for hand localization and one for pose estimation) while maintaining promising results. To the best of our knowledge, this is the first successful attempt to use the same model architecture in both the standard cropped-hand setup and the full-scene image setup while achieving competitive results in both. On the NYU dataset, DePOTR and MuTr reach a precision of 7.85 mm and 8.71 mm, respectively.
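For readers unfamiliar with transformer-based pose regression, the sketch below illustrates the general pattern such methods follow: a backbone encodes the depth image into feature tokens, and a transformer decodes one learned query per joint into a 3D coordinate. This is a minimal, hypothetical example; all module names, dimensions, and the joint count are assumptions for illustration, not the actual DePOTR architecture, which the abstract does not specify.

```python
# Hypothetical sketch of a DETR-style transformer for 3D hand pose
# estimation from a depth image. Names and dimensions are illustrative
# assumptions, not the paper's DePOTR design.
import torch
import torch.nn as nn

class TransformerPoseEstimator(nn.Module):
    def __init__(self, num_joints=14, d_model=256, nhead=8, num_layers=6):
        super().__init__()
        # CNN backbone turns the single-channel depth image into a grid
        # of feature tokens (positional encodings omitted for brevity).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv2d(64, d_model, kernel_size=3, stride=2, padding=1),
        )
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        # One learned query per joint, decoded into a 3D coordinate.
        self.joint_queries = nn.Parameter(torch.randn(num_joints, d_model))
        self.head = nn.Linear(d_model, 3)

    def forward(self, depth):                      # depth: (B, 1, H, W)
        feats = self.backbone(depth)               # (B, d_model, h, w)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, h*w, d_model)
        queries = self.joint_queries.unsqueeze(0).expand(depth.size(0), -1, -1)
        decoded = self.transformer(tokens, queries)  # (B, num_joints, d_model)
        return self.head(decoded)                  # (B, num_joints, 3)
```

In this reading, a full-scene variant like MuTr would feed the uncropped depth image through the same architecture rather than a pre-localized hand crop, which is what makes a separate detection model unnecessary.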
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10305187
DOI: http://dx.doi.org/10.3390/s23125509