The actor-critic (AC) learning control architecture is an important framework for reinforcement learning (RL) with continuous states and actions. To improve learning efficiency and convergence, previous work has mainly been devoted to solving the regularization and feature learning problems in policy evaluation. In this article, we propose a novel AC learning control method with regularization and feature selection for policy gradient estimation in the actor network. The main contribution is that ℓ1-regularization is applied to the actor network to perform feature selection. In each iteration, the policy parameters are updated by the regularized dual-averaging (RDA) technique, which solves a minimization problem involving two terms: the running average of the past policy gradients and the ℓ1-regularization term on the policy parameters. Our algorithm computes the solution of this minimization problem efficiently, and we call the new adaptation of policy gradient RDA-policy gradient (RDA-PG). The proposed RDA-PG can learn both stochastic and deterministic near-optimal policies. The convergence of the proposed algorithm is established based on the theory of two-timescale stochastic approximation. Simulation and experimental results show that RDA-PG successfully performs feature selection in the actor and learns sparse actor representations in both the stochastic and deterministic cases. RDA-PG outperforms existing AC algorithms on standard RL benchmark problems with irrelevant or redundant features.
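The minimization the abstract describes has the closed-form soft-thresholding solution known from Xiao's RDA method. The sketch below is a minimal illustration of that ℓ1-regularized dual-averaging step applied to a running average of negated policy gradients; it is not the authors' implementation, and the feature dimension, gradient model `mu`, threshold `lam`, and proximal constant `gamma` are assumptions chosen purely for demonstration.

```python
import numpy as np

def rda_l1_update(gbar, t, lam, gamma):
    """Closed-form l1-regularized dual-averaging (RDA) step (Xiao, 2010).

    Solves  w = argmin_w  <gbar, w> + lam*||w||_1 + (gamma/sqrt(t)) * 0.5*||w||_2^2
    entry-wise: components whose averaged gradient magnitude stays below the
    threshold lam are set exactly to zero, which yields a sparse actor.
    """
    shrunk = np.maximum(np.abs(gbar) - lam, 0.0)        # soft-threshold
    return -(np.sqrt(t) / gamma) * np.sign(gbar) * shrunk

# Toy illustration (hypothetical numbers): the first 5 actor features carry
# signal, the last 5 are irrelevant noise; the RDA step zeros out the latter.
rng = np.random.default_rng(0)
d = 10
mu = np.concatenate([np.ones(5), np.zeros(5)])  # true gradient means per feature
gbar = np.zeros(d)
theta = np.zeros(d)
for t in range(1, 501):
    grad = mu + 0.5 * rng.normal(size=d)   # stand-in for a sampled policy gradient
    gbar += (-grad - gbar) / t             # running average of negated gradients
    theta = rda_l1_update(gbar, t, lam=0.1, gamma=5.0)

print("nonzero actor weights:", np.flatnonzero(theta))  # expect indices 0..4
```

Because the threshold is applied to the averaged gradient rather than to a single noisy sample, weights of irrelevant features are driven exactly to zero instead of merely shrunk, which is the feature-selection effect the article reports. In the full RDA-PG algorithm this actor update runs on the slower of the two timescales, with the critic updated on the faster one.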

Source: http://dx.doi.org/10.1109/TNNLS.2020.2981377

Publication Analysis

Top Keywords

feature selection (16)
learning control (12)
regularization feature (12)
policy gradient (12)
actor-critic learning (8)
selection policy (8)
gradient estimation (8)
actor network (8)
policy parameters (8)
minimization problem (8)
