This dataset was collected during the 2023 and 2024 RoboCup competitions using the TIAGo robot equipped with an RGB-D camera, a Hokuyo laser scanner, and a RØDE microphone. The dataset includes ROSbag files that capture the robot's sensory data and task-planning behavior, as well as video recordings that provide third-person perspectives of task execution. Together, these data document the performance of autonomous robots carrying out social and navigation tasks in dynamic environments involving human interaction.
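As a hedged illustration of how such ROSbag recordings can be inspected, the sketch below uses the standard ROS 1 `rosbag` Python API; the bag filename and the RGB-D topic name are placeholder assumptions for illustration, not details taken from the dataset itself.

```python
# Minimal sketch: inspect a ROSbag recording with the ROS 1 rosbag API.
# The filename and topic name below are hypothetical placeholders.
import rosbag

with rosbag.Bag("tiago_run_01.bag") as bag:  # hypothetical bag file
    # List every recorded topic with its message type and count.
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(topic, info.msg_type, info.message_count)

    # Read the first message on an assumed RGB-D image topic.
    for topic, msg, t in bag.read_messages(topics=["/xtion/rgb/image_raw"]):
        print(t.to_sec(), msg.header.frame_id)
        break  # inspect only the first message
```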
Navigating robots with precision in complex environments remains a significant challenge. In this article, we present an approach to enhance robot localization in dynamic and intricate spaces such as homes and offices. We leverage Visual Question Answering (VQA) techniques to integrate semantic insights into traditional mapping methods, formulating a novel position-hypothesis generation scheme that assists localization methods while also addressing challenges related to mapping accuracy and localization reliability.
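The sketch below illustrates the general idea in a minimal form, assuming a toy semantic map and a pre-computed VQA answer; the place labels, coordinates, and noise model are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: turn a VQA-style answer about the current view
# (e.g. "kitchen") into position hypotheses that could seed a
# localization method such as a particle filter. All values are
# illustrative assumptions.
import random

# Hypothetical semantic map: place label -> candidate (x, y) poses in metres.
SEMANTIC_MAP = {
    "kitchen": [(1.0, 2.5), (1.4, 3.0)],
    "office":  [(6.2, 0.8)],
}

def position_hypotheses(vqa_label, n=50, sigma=0.3):
    """Sample n noisy pose hypotheses around map anchors matching the label."""
    anchors = SEMANTIC_MAP.get(vqa_label)
    if not anchors:
        return []
    hypotheses = []
    for _ in range(n):
        x, y = random.choice(anchors)
        hypotheses.append((x + random.gauss(0, sigma),
                           y + random.gauss(0, sigma)))
    return hypotheses

# Example: seed a localizer with hypotheses for a "kitchen" answer.
print(position_hypotheses("kitchen", n=3))
```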