Deep reinforcement learning (DRL) and deep multiagent reinforcement learning (MARL) have achieved significant success across a wide range of domains, including game artificial intelligence (AI), autonomous vehicles, and robotics. However, DRL and deep MARL agents are widely known to be sample inefficient: millions of interactions are usually needed even for relatively simple problem settings, which prevents their wide application and deployment in real-world industrial scenarios. One bottleneck challenge behind this is the well-known exploration problem, i.e., how to efficiently explore the environment and collect informative experiences that benefit policy learning toward the optimum. This problem becomes more challenging in complex environments with sparse rewards, noisy distractions, long horizons, and nonstationary co-learners. In this article, we conduct a comprehensive survey of existing exploration methods for both single-agent RL and multiagent RL. We begin by identifying several key challenges to efficient exploration. We then provide a systematic survey of existing approaches, classifying them into two major categories: uncertainty-oriented exploration and intrinsic motivation-oriented exploration. Beyond these two main branches, we also cover other notable exploration methods built on different ideas and techniques. In addition to algorithmic analysis, we provide a comprehensive and unified empirical comparison of different exploration methods for DRL on a set of commonly used benchmarks. Based on our algorithmic and empirical investigation, we summarize the open problems of exploration in DRL and deep MARL and point out a few future directions.
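As a concrete illustration of the intrinsic motivation-oriented category mentioned above, the Python sketch below adds a count-based novelty bonus to the extrinsic reward, so that rarely visited states yield a larger shaped reward. This is a minimal sketch under our own assumptions: the bonus form beta / sqrt(N(s)), the class name CountBasedBonus, and the state discretization via tuple(state) are illustrative choices, not code from the article.

    # Minimal sketch of intrinsic motivation-oriented exploration:
    # a count-based novelty bonus added to the environment reward.
    # Bonus form and all names are illustrative assumptions.
    from collections import defaultdict
    import math

    class CountBasedBonus:
        """Exploration bonus that decays as a state is revisited."""

        def __init__(self, beta: float = 0.1):
            self.beta = beta                # bonus scale (hypothetical default)
            self.counts = defaultdict(int)  # visit counts N(s) per discretized state

        def reward(self, state, extrinsic_reward: float) -> float:
            key = tuple(state)              # assumes a hashable, discretizable state
            self.counts[key] += 1
            intrinsic = self.beta / math.sqrt(self.counts[key])
            return extrinsic_reward + intrinsic  # shaped reward for policy updates

In practice, a bonus of this kind is summed with the extrinsic reward before each policy update; because it decays with the visit count, it steers the agent toward under-explored regions of the state space, which is the core idea the survey's intrinsic-motivation branch generalizes.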
DOI: http://dx.doi.org/10.1109/TNNLS.2023.3236361