Marker-less hand-eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computational requirements, present difficulties for this purpose because the correspondence between their projected image coordinates and 3D coordinates is incomplete. In this work, we introduce a hand-eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects, such as the links of robotic manipulators, perform poorly in the orientation domain; this shortcoming can be overcome by analyzing the latent-space vectors constructed during the autoencoding process. The technique is computationally inexpensive and runs in real time under markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and captured point clouds to measure translation and orientation errors, comparing the results against a baseline that uses traditional checkerboard markers.
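
To make the latent-space analysis step concrete, the sketch below shows one common way an augmented-autoencoder codebook lookup can be implemented: synthetic renders at known orientations are encoded once offline, and at runtime the observed image's latent code is matched to its nearest codebook entry by cosine similarity. This is a minimal illustration under stated assumptions, not the authors' released code; the `encoder` callable, the `build_codebook` and `retrieve_rotation` names, and the array shapes are all assumptions introduced here.

```python
import numpy as np

def build_codebook(encoder, renders, rotations):
    """Encode synthetic renders of a manipulator link at known orientations
    and store the L2-normalized latent vectors as a lookup codebook.

    encoder   -- hypothetical callable mapping an image to a 1-D latent vector
    renders   -- iterable of images rendered at the orientations below
    rotations -- array of the corresponding rotation parameters
    """
    codes = np.stack([encoder(img) for img in renders])
    codes /= np.linalg.norm(codes, axis=1, keepdims=True)
    return codes, np.asarray(rotations)

def retrieve_rotation(encoder, image, codes, rotations):
    """Estimate orientation as the cosine-similarity nearest neighbour
    between the observed image's latent code and the codebook."""
    z = encoder(image)
    z = z / np.linalg.norm(z)
    sims = codes @ z                   # cosine similarity per codebook entry
    return rotations[np.argmax(sims)]  # rotation of the closest latent code
```

Because the runtime cost is one encoder pass plus a single matrix-vector product over a precomputed codebook, a lookup of this form is consistent with the real-time claim made in the abstract.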

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11281015
DOI: http://dx.doi.org/10.3390/s24144662
