The integration of robotics in the garment industry remains relatively limited, primarily due to the challenges posed by the highly deformable nature of garments. The objective of this study is thus to explore a vision-based garment recognition and environment reconstruction model to facilitate the application of robots in garment processing. Object SLAM (Simultaneous Localization and Mapping) was employed as the core methodology for real-time mapping and tracking. To enable garment detection and reconstruction, two datasets were created: a 2D garment image dataset for instance segmentation model training and a synthetic 3D garment mesh dataset to enhance the DeepSDF (Deep Signed Distance Function) model for generative garment reconstruction. In addition to garment detection, the SLAM system was extended to identify and reconstruct environmental planes using the CAPE (Cylinder and Plane Extraction) model. The implementation was tested with an Intel RealSense camera, demonstrating the feasibility of simultaneous garment and plane detection and reconstruction. This study shows improved performance in garment recognition with the 2D instance segmentation models and an enhanced understanding of garment shapes and structures with the DeepSDF model. The integration of CAPE plane detection with SLAM allows for more robust environment reconstruction capable of handling multiple objects. The implementation and evaluation of the system highlight its potential for enhancing automation and efficiency in the garment processing industry.
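To make the signed-distance representation behind DeepSDF concrete: the model learns a function that maps a query point (conditioned on a shape latent code) to its signed distance from the surface, with negative values inside, zero on the surface, and positive values outside. The following is a minimal illustrative sketch using an analytic sphere SDF as a stand-in for the learned garment decoder; the function name and the sphere geometry are illustrative assumptions, not part of the paper's implementation.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Analytic signed distance to a sphere, standing in for a learned
    DeepSDF decoder: negative inside the surface, zero on it, positive
    outside. (Illustrative example, not the paper's trained model.)"""
    return math.dist(p, center) - radius

# Query a few points, as a reconstruction pipeline would when extracting
# the zero level set (e.g. via marching cubes) from the learned field.
print(sphere_sdf((0.0, 0.0, 0.0)))  # inside the surface -> negative
print(sphere_sdf((1.0, 0.0, 0.0)))  # on the surface     -> ~0
print(sphere_sdf((2.0, 0.0, 0.0)))  # outside            -> positive
```

In the actual system, the analytic function above would be replaced by a trained network evaluated on a 3D grid, and the garment surface recovered as the zero level set of the predicted distances.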
DOI: http://dx.doi.org/10.3390/s24237622
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11644764