Mobile robots exploration through CNN-based reinforcement learning.

Robotics and Biomimetics

Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, 999077, Hong Kong; Department of Electronic and Computer Engineering, HKUST, Clear Water Bay, Kowloon, 999077, Hong Kong.

Published: December 2016

Exploration of an unknown environment is a fundamental task for mobile robots. In this paper, we present a reinforcement learning method for solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted by a pre-trained convolutional neural network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieves exploration and obstacle-avoidance abilities in several different simulated environments. To the best of our knowledge, this is the first time reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
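The abstract describes a pipeline in which a CNN converts a single depth image into a feature vector and a deep Q-network maps those features to values over a small set of motion commands. The sketch below is not the authors' code; the layer sizes, the three-way action set, and the epsilon value are illustrative assumptions, and the convolutional stack stands in for the pre-trained feature extractor mentioned in the abstract.

```python
# Minimal sketch (assumptions noted above) of a depth-image DQN controller.
import torch
import torch.nn as nn

ACTIONS = ["forward", "turn_left", "turn_right"]  # assumed discrete action set


class DepthDQN(nn.Module):
    def __init__(self, num_actions: int = len(ACTIONS)):
        super().__init__()
        # Stand-in for the pre-trained CNN feature extractor; in practice its
        # weights would be loaded from a pre-trained model and possibly frozen.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Q-head: maps the flattened features to one Q-value per action.
        self.q_head = nn.Sequential(
            nn.LazyLinear(512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, H, W) single-channel depth image from the RGB-D sensor.
        return self.q_head(self.features(depth))


def select_action(net: DepthDQN, depth: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy control step, as in standard DQN training."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(len(ACTIONS), (1,)).item()
    with torch.no_grad():
        return net(depth.unsqueeze(0)).argmax(dim=1).item()


# Example: one control step on a simulated 84x84 depth frame.
net = DepthDQN()
frame = torch.rand(1, 84, 84)  # placeholder depth image, values in [0, 1]
print(ACTIONS[select_action(net, frame)])
```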


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5177670
DOI: http://dx.doi.org/10.1186/s40638-016-0055-x

Publication Analysis

Top Keywords

mobile robots (12); reinforcement learning (12); depth image (8); exploration (5); robots exploration (4); exploration cnn-based (4); cnn-based reinforcement (4); learning (4); learning exploration (4); exploration unknown (4)

