
Taming Lagrangian chaos with multi-objective reinforcement learning.

Eur Phys J E Soft Matter

Istituto dei Sistemi Complessi, CNR, Via dei Taurini 19, 00185, Rome, Italy.

Published: March 2023

AI Article Synopsis

  • The study focuses on optimizing the behavior of two active particles in a 2D flow by balancing their dispersion rate and control activation costs using multi-objective reinforcement learning (MORL).
  • MORL successfully generates a range of efficient solutions, known as the Pareto frontier, outperforming traditional heuristic strategies.
  • The findings reveal that there’s a specific range of decision-making time frames where reinforcement learning yields significant improvements, particularly emphasizing the need for better flow knowledge with larger decision times.
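The Pareto frontier mentioned above is the set of trade-off solutions not dominated by any other: no other solution is at least as good on both objectives and strictly better on one. As a minimal sketch (not the paper's actual code), the following shows how candidate policies, summarized here as hypothetical (dispersion rate, control cost) pairs with both objectives minimized, can be filtered down to their Pareto front:

```python
def pareto_front(points):
    """Return the points not weakly dominated by any other point.

    Each point is a (dispersion_rate, control_cost) pair; both objectives
    are minimized, so q dominates p when q is <= p in both components.
    """
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1]
            for q in points
        )
        if not dominated:
            front.append(p)
    return front


# Hypothetical trade-off solutions: (dispersion rate, control cost)
solutions = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.9), (0.6, 0.6), (0.9, 0.4)]
print(pareto_front(solutions))  # → [(0.9, 0.1), (0.5, 0.5), (0.2, 0.9)]
```

Here (0.6, 0.6) is dominated by (0.5, 0.5), and (0.9, 0.4) by (0.9, 0.1), so neither survives; the three remaining points form the frontier.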

Article Abstract

We consider the problem of two active particles in 2D complex flows with the multi-objective goals of minimizing both the dispersion rate and the control activation cost of the pair. We approach the problem by means of multi-objective reinforcement learning (MORL), combining scalarization techniques with a Q-learning algorithm, for Lagrangian drifters that have variable swimming velocity. We show that MORL is able to find a set of trade-off solutions forming an optimal Pareto frontier. As a benchmark, we show that a set of heuristic strategies are dominated by the MORL solutions. We consider the situation in which the agents cannot update their control variables continuously, but only after a discrete (decision) time, τ. We show that there is a range of decision times, in between the Lyapunov time and the continuous updating limit, where reinforcement learning finds strategies that significantly improve over heuristics. In particular, we discuss how large decision times require enhanced knowledge of the flow, whereas for smaller τ all a priori heuristic strategies become Pareto optimal.
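The scalarization approach described in the abstract collapses the two objectives into a single reward via a weight vector, so that standard tabular Q-learning can be applied; sweeping the weights then produces one policy per trade-off, whose returns can be filtered for Pareto optimality. The sketch below illustrates this idea on a hypothetical two-state toy environment; the environment, state/action sizes, and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import random


def scalarized_q_learning(step, n_states, n_actions, weights,
                          episodes=200, horizon=50,
                          alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular epsilon-greedy Q-learning on a linearly scalarized reward.

    `step(state, action)` must return (next_state, (r1, r2)), where r1
    rewards low dispersion and r2 penalizes control activation;
    `weights` = (w1, w2) fixes the trade-off between the two objectives.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if random.random() < eps:
                a = random.randrange(n_actions)  # explore
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])  # exploit
            s2, (r1, r2) = step(s, a)
            r = weights[0] * r1 + weights[1] * r2  # linear scalarization
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q


# Hypothetical toy environment: action 1 reduces dispersion (positive r1)
# but pays a control activation cost (negative r2); action 0 does neither.
def toy_step(s, a):
    r1 = 1.0 if a == 1 else 0.0
    r2 = -0.5 if a == 1 else 0.0
    return (1 - s), (r1, r2)


Q = scalarized_q_learning(toy_step, n_states=2, n_actions=2,
                          weights=(1.0, 1.0))
```

With weights (1.0, 1.0) the net scalar reward for activating control is positive, so the learned Q-values favor action 1; shifting weight toward the cost objective would flip that preference, tracing out different points of the trade-off.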


Source: http://dx.doi.org/10.1140/epje/s10189-023-00271-0

