Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis.

PLoS One

Faculty of Computer Science and Engineering, Frankfurt University of Applied Sciences, Frankfurt am Main, Hessen, Germany.

Published: March 2019

We present a biologically motivated model for visual self-localization that extracts a spatial representation of the environment directly from high-dimensional image data by employing a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while remaining invariant to its orientation, resembling place cells in a rodent's hippocampus. Using an omnidirectional mirror makes it possible to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The results show that the proposed straightforward model enables precise self-localization, with accuracies in the range of 13-33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.
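
The model above is built on Slow Feature Analysis (SFA), which finds projections of a time series that vary as slowly as possible over time. As a rough illustration only (not the authors' implementation, which operates on image data), the linear SFA step can be sketched in Python with NumPy; the function name `linear_sfa` and the toy signal below are hypothetical:

```python
import numpy as np

def linear_sfa(X, n_features=2):
    """Minimal linear Slow Feature Analysis.

    X: array of shape (T, D), a time series of D-dimensional signals.
    Returns the n_features slowest-varying linear projections of X.
    """
    # Center the data.
    X = X - X.mean(axis=0)
    # Whiten via eigendecomposition of the covariance matrix.
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    keep = eigvals > 1e-10                      # drop near-zero variance directions
    W = eigvecs[:, keep] / np.sqrt(eigvals[keep])
    Z = X @ W                                   # whitened signals
    # Approximate the temporal derivative with finite differences.
    Zdot = np.diff(Z, axis=0)
    # The slowest features are the smallest-eigenvalue directions of
    # the covariance of the derivative.
    dcov = Zdot.T @ Zdot / len(Zdot)
    _, dvecs = np.linalg.eigh(dcov)             # eigenvalues ascending
    return Z @ dvecs[:, :n_features]            # slowest features first
```

Applied to a mixture of a slow and a fast sinusoid, the first output recovers the slow component (up to sign and scale); in the paper, the slow outputs learned from camera images play the role of a position code.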


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6150500 (PMC)
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0203994 (PLOS)


Similar Publications

In dynamic and unpredictable environments, the precise localization of first responders and rescuers is crucial for effective incident response. This paper introduces a novel approach that leverages three complementary localization modalities: visual-based, Galileo-based, and inertial-based. Each modality contributes uniquely to the final Fusion tool, enabling seamless indoor and outdoor localization and offering a robust, accurate localization solution without reliance on pre-existing infrastructure, which is essential for maintaining responder safety and optimizing operational effectiveness.


Object-Oriented and Visual-Based Localization in Urban Environments.

Sensors (Basel)

March 2024

Department of Electrical Engineering and Computer Science, University of California, Irvine, CA 92697, USA.

In visual-based localization, prior research falls short of addressing the challenges posed by Internet of Things devices with limited computational resources. The dominant state-of-the-art models rely on separate feature extractors and descriptors without considering the constraints of small hardware, inconsistent image scale, or the presence of multiple objects. We introduce "OOPose", a real-time object-oriented pose estimation framework that leverages dense features from off-the-shelf object detection neural networks.


Research on Inter-Frame Feature Mismatch Removal Method of VSLAM in Dynamic Scenes.

Sensors (Basel)

February 2024

Engineering Research and Design Institute of Agricultural Equipment, Hubei University of Technology, Wuhan 430068, China.

Visual Simultaneous Localization and Mapping (VSLAM) estimates the robot's pose in three-dimensional space by analyzing the depth variations of inter-frame feature points. Inter-frame feature point mismatches can lead to tracking failure, impacting the accuracy of the mobile robot's self-localization and mapping. This paper proposes a method for removing mismatches of image features in dynamic scenes in visual SLAM.
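
The abstract does not detail the paper's specific removal method, but a common baseline for rejecting inter-frame feature mismatches is geometric outlier rejection with RANSAC: fit a simple motion model to minimal samples of correspondences and keep only matches consistent with the best model. A minimal sketch under that assumption (the function name `ransac_filter_matches` and an affine motion model are illustrative choices, not the paper's method):

```python
import numpy as np

def ransac_filter_matches(p1, p2, n_iters=200, threshold=2.0, seed=0):
    """Reject mismatched correspondences with RANSAC under a 2D affine model.

    p1, p2: (N, 2) arrays of matched keypoint coordinates in two frames.
    Returns a boolean mask marking matches consistent with the best model.
    """
    rng = np.random.default_rng(seed)
    N = len(p1)
    P1 = np.hstack([p1, np.ones((N, 1))])   # homogeneous source points
    best_mask = np.zeros(N, dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: 3 correspondences determine a 2x3 affine transform.
        idx = rng.choice(N, size=3, replace=False)
        A, *_ = np.linalg.lstsq(P1[idx], p2[idx], rcond=None)
        # Keep matches whose reprojection error is below the threshold.
        residuals = np.linalg.norm(P1 @ A - p2, axis=1)
        mask = residuals < threshold
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Matches flagged as outliers are discarded before pose estimation, which is the general failure mode the paper targets: a few gross mismatches (e.g. from moving objects) otherwise corrupt the inter-frame motion estimate.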


A visual questioning answering approach to enhance robot localization in indoor environments.

Front Neurorobot

November 2023

Intelligent Robotics Lab, Signal Theory, Communications, Telematics Systems, and Computation Department, Rey Juan Carlos University, Fuenlabrada, Spain.

Navigating robots with precision in complex environments remains a significant challenge. In this article, we present an innovative approach to enhance robot localization in dynamic and intricate spaces like homes and offices. We leverage Visual Question Answering (VQA) techniques to integrate semantic insights into traditional mapping methods, formulating a novel position hypothesis generation to assist localization methods, while also addressing challenges related to mapping accuracy and localization reliability.


Effective self-localization requires that the brain can resolve ambiguities in incoming sensory information arising from self-similarities (symmetries) in the environment structure. We investigated how place cells use environmental cues to resolve the ambiguity of a rotationally symmetric environment, by recording from hippocampal CA1 in rats exploring a "2-box." This apparatus comprises two adjacent rectangular compartments, identical but with directionally opposed layouts (cue card at one end and central connecting doorway) and distinguished by their odor contexts (lemon vs.

