This brief provides an approximate online adaptive solution to the infinite-horizon optimal tracking problem for control-affine continuous-time nonlinear systems with unknown drift dynamics. To relax the persistence-of-excitation condition, model-based reinforcement learning is implemented using a concurrent-learning-based system identifier to simulate experience by evaluating the Bellman error over unexplored areas of the state space. Tracking of the desired trajectory and convergence of the developed policy to a neighborhood of the optimal policy are established via a Lyapunov-based stability analysis. Simulation results demonstrate the effectiveness of the developed technique.
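For context, the standard formulation behind this class of methods can be sketched as follows; the notation is conventional in the approximate dynamic programming literature and is an illustrative reconstruction, not quoted from the paper. The plant is control-affine with unknown drift,

    \dot{x} = f(x) + g(x)u,

the objective is the infinite-horizon cost

    J(e, \mu) = \int_t^{\infty} r\big(e(\tau), \mu(\tau)\big)\, d\tau, \qquad e \triangleq x - x_d,

where x_d is the desired trajectory, and the Bellman error for a value estimate \hat{V} and policy estimate \hat{\mu} takes the form

    \delta = r(e, \hat{\mu}) + \nabla \hat{V}(e)^{\top}\big(\hat{f}(x) + g(x)\hat{\mu} - \dot{x}_d\big),

with \hat{f} denoting the drift estimate produced by the concurrent-learning identifier. Because \delta depends on the identified model rather than on measured state transitions, it can be evaluated at arbitrary user-selected points in the state space, which is the sense in which the method simulates experience over unexplored regions.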


Source: http://dx.doi.org/10.1109/TNNLS.2015.2511658
