We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and the use of local trajectory optimizers to globally optimize a policy and its associated value function. Our focus is on finding steady-state policies for the deterministic, time-invariant, discrete-time control problems with continuous states and actions that are common in robotics. In this paper, we describe our approach and provide initial results on several simulated robotics problems.
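To make the first two threads concrete, here is a minimal sketch (not the authors' implementation) of value iteration over a sparse random sample of states, with the value function approximated by a simple local model (a k-nearest-neighbor average over the sampled states). The scalar dynamics, quadratic cost, and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative deterministic, time-invariant, discrete-time problem:
# scalar state x, scalar action a, dynamics x' = x + a, quadratic cost.
def dynamics(x, a):
    return np.clip(x + a, -1.0, 1.0)

def cost(x, a):
    return x**2 + 0.1 * a**2

# Thread 1: sparse random sampling of states (no grid).
states = rng.uniform(-1.0, 1.0, size=200)
actions = np.linspace(-0.5, 0.5, 11)
gamma = 0.95
V = np.zeros_like(states)

# Thread 2: local model of the value function -- here a k-NN average
# over the sampled states (a stand-in for richer local models).
def v_hat(x, V, k=3):
    idx = np.argsort(np.abs(states - x))[:k]
    return V[idx].mean()

# Value iteration restricted to the sampled states.
for _ in range(100):
    V_new = np.empty_like(V)
    for i, s in enumerate(states):
        q = [cost(s, a) + gamma * v_hat(dynamics(s, a), V) for a in actions]
        V_new[i] = min(q)
    V = V_new
```

After convergence, the approximate value should be lowest near the origin, where the quadratic cost is minimized. The third thread, local trajectory optimization, would replace the coarse action grid above with an optimizer run from each sampled state.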
DOI: http://dx.doi.org/10.1109/TSMCB.2008.926610