Inspired by progress in cognitive science, artificial intelligence, computer vision, and mobile computing technologies, we propose and implement a wearable virtual usher for cognitive indoor navigation based on egocentric visual perception. A novel computational framework of cognitive wayfinding in an indoor environment is proposed, comprising a context model, a route model, and a process model. A hierarchical structure is proposed to represent the cognitive context knowledge of indoor scenes.
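As a rough illustration of the hierarchical context knowledge mentioned above, the sketch below organizes indoor scene knowledge as a tree of nested places (building, floor, corridor, room). This is a minimal, hypothetical rendering; the class and field names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a hierarchical indoor-context model (hypothetical;
# names are illustrative, not taken from the paper).
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ContextNode:
    """One node in the scene hierarchy, e.g. building -> floor -> corridor -> room."""
    name: str
    level: str                      # e.g. "building", "floor", "corridor", "room"
    children: list["ContextNode"] = field(default_factory=list)

    def add(self, child: "ContextNode") -> "ContextNode":
        self.children.append(child)
        return child

    def find(self, name: str) -> Optional["ContextNode"]:
        """Depth-first search for a named place anywhere below this node."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None


# Example: encode a small slice of an indoor environment and query it.
building = ContextNode("Science Block", "building")
floor2 = building.add(ContextNode("Floor 2", "floor"))
corridor = floor2.add(ContextNode("East Corridor", "corridor"))
corridor.add(ContextNode("Room 2.14", "room"))

target = building.find("Room 2.14")
print(target.level if target else "not found")   # -> "room"
```

A tree of this kind lets a navigation process reason at multiple granularities, e.g. first localizing to a floor and corridor before resolving an individual room.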
During wayfinding in a novel environment, we encounter many new places. Some of those places are encoded by our spatial memory. But how does the human brain "decide" which locations are more important than others, and how do backtracking and repetition priming enhance memorization of these scenes? In this work, we explore how backtracking improves encoding of encountered locations.