Network Architecture for Optimizing Deep Deterministic Policy Gradient Algorithms.

Comput Intell Neurosci

School of Computer Science and Technology, Soochow University, Shizi Street 1, Suzhou 215006, China.

Published: November 2022

The traditional Deep Deterministic Policy Gradient (DDPG) algorithm is widely used in continuous action spaces, but it still suffers from two problems: it easily falls into local optima, and its value estimates fluctuate widely. To address these deficiencies, this paper proposes a dual-actor, dual-critic DDPG algorithm (DN-DDPG). First, a second critic network is added to the original actor-critic architecture to assist training, and at each update the smaller of the two critics' outputs is taken as the estimated value of the action, reducing the chance of converging to a local optimum. Then, a dual-actor network is introduced to alleviate the value underestimation produced by the dual-critic network: the action valued more highly across the two actor networks is selected for the update, which stabilizes training. Finally, the improved method is validated on four continuous-action tasks provided by MuJoCo, and the results show that, compared with the classical algorithm, it reduces the fluctuation range of the error and improves the cumulative return.
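To make the two update rules described above concrete, here is a minimal sketch in PyTorch of the dual-critic target (bootstrap from the smaller of the two critic estimates) and the dual-actor action selection (act with whichever actor's proposal is valued more). The network sizes, the names (Actor, Critic, select_action, critic_target), and the use of the pessimistic minimum critic to score the two candidate actions are illustrative assumptions based on the abstract, not the authors' published code.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # bounded continuous actions
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def select_action(state, actor1, actor2, critic1, critic2):
    # Dual-actor rule: each actor proposes an action, and the proposal the
    # critics value more is used. Scoring candidates with the min of the two
    # critics is an assumption made here to keep the sketch pessimistic.
    a1, a2 = actor1(state), actor2(state)
    q1 = torch.min(critic1(state, a1), critic2(state, a1))
    q2 = torch.min(critic1(state, a2), critic2(state, a2))
    return torch.where(q1 >= q2, a1, a2)  # broadcasts over the action dim

def critic_target(reward, next_state, done, gamma,
                  actor1_t, actor2_t, critic1_t, critic2_t):
    # Dual-critic rule: bootstrap from the smaller of the two target-critic
    # estimates, as the abstract describes, to curb value overestimation.
    with torch.no_grad():
        next_action = select_action(next_state, actor1_t, actor2_t,
                                    critic1_t, critic2_t)
        q_next = torch.min(critic1_t(next_state, next_action),
                           critic2_t(next_state, next_action))
        return reward + gamma * (1.0 - done) * q_next
```

In a full training loop both critics would regress toward this shared target and each actor would be updated by a deterministic policy gradient, as in standard DDPG; those details are omitted from the sketch.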


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9699738
DOI: http://dx.doi.org/10.1155/2022/1117781

Publication Analysis

Top Keywords

Keyword                   Frequency
network architecture      8
deep deterministic        8
deterministic policy      8
policy gradient           8
ddpg algorithm            8
continuous action         8
improved method           8
network                   5
algorithm                 5
architecture optimizing   4

