Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM.

Frontiers in Neurorobotics

School of Mechanical and Electrical Engineering, Soochow University, Suzhou, China.

Published: September 2020

This paper describes an improved brain-inspired simultaneous localization and mapping (RatSLAM) system that extracts visual features from saliency maps generated by a frequency-tuned (FT) model. In the traditional RatSLAM algorithm, the visual template is organized as a one-dimensional vector whose values depend only on pixel intensity, making the feature susceptible to changes in illumination. In contrast to this approach of generating visual templates directly from raw RGB images, we propose using an FT model to convert RGB images into saliency maps, from which the visual templates are then obtained. Templates extracted from saliency maps retain more of the feature information contained in the original images. Our experimental results demonstrate improved loop closure detection accuracy, as measured by the number of loop closures detected by our method compared with the traditional RatSLAM system. We further verified that the proposed FT model-based visual templates improve the robustness with which RatSLAM identifies familiar visual scenes.
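As a rough illustration of the pipeline the abstract describes, the sketch below computes an FT-style saliency map and then collapses it into a RatSLAM-style one-dimensional template. Two caveats: the published FT model (Achanta et al., 2009) operates in CIE Lab space, whereas this self-contained version stays in RGB; and the function names, the Gaussian sigma, and the column-mean template construction are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ft_saliency(rgb):
    """FT-style saliency map (after Achanta et al., 2009).
    Simplification: works in RGB rather than the original CIE Lab space."""
    img = rgb.astype(np.float64)
    # Slightly blur each channel to suppress high-frequency texture/noise.
    blurred = np.stack(
        [gaussian_filter(img[..., c], sigma=1.6) for c in range(3)], axis=-1)
    # The mean color of the whole image approximates the non-salient background.
    mean_color = img.reshape(-1, 3).mean(axis=0)
    # Saliency = per-pixel Euclidean distance from the mean color.
    sal = np.linalg.norm(blurred - mean_color, axis=-1)
    # Normalize to [0, 1].
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)

def scanline_template(saliency_map):
    # RatSLAM-style 1D visual template: column-wise means of the map.
    return saliency_map.mean(axis=0)

def template_distance(t1, t2):
    # Mean absolute difference; below a threshold, the scene is "familiar"
    # and a loop closure candidate is reported.
    return np.mean(np.abs(t1 - t2))
```

Because the template is built from the saliency map rather than from raw intensities, a global illumination shift changes all pixels by a similar amount and largely cancels out when each pixel is compared against the image mean, which is the intuition behind the claimed robustness.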


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7546858 (PMC)
http://dx.doi.org/10.3389/fnbot.2020.568091 (DOI)

