Using a map in an unfamiliar environment requires identifying correspondences between elements of the map's allocentric representation and elements in egocentric views, and aligning the map with the environment can be challenging. Virtual reality (VR) allows learning about unfamiliar environments through a sequence of egocentric views that correspond closely to the perspectives experienced in the actual environment. We compared three methods of preparing for localization and navigation tasks performed by teleoperating a robot in an office building: studying a floor plan of the building and two forms of VR exploration. One group of participants studied a floor plan of the building, a second group explored a faithful VR reconstruction of the building from a normal-sized avatar's perspective, and a third group explored the same VR reconstruction from a giant-sized avatar's perspective. All methods included marked checkpoints. The subsequent tasks were identical for all groups: the self-localization task required participants to indicate the approximate location of the robot in the environment, and the navigation task required navigating between checkpoints. Participants took less time to learn with the giant VR perspective and with the floor plan than with the normal VR perspective. Both VR learning methods significantly outperformed the floor plan in the self-localization task. Navigation was performed faster after learning from the giant perspective than after learning from the normal perspective or the floor plan. We conclude that the normal perspective, and especially the giant perspective, in VR are viable options for preparing for teleoperation in unfamiliar environments when a virtual model of the environment is available.
DOI: http://dx.doi.org/10.1109/TVCG.2023.3247052