Taming chimeras in coupled oscillators using soft actor-critic based reinforcement learning.

Chaos

Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia.

Published: January 2025

We propose a universal method based on deep reinforcement learning (specifically, soft actor-critic) to control chimera states in coupled oscillators. The control policy is learned by maximizing the expected cumulative reward within the reinforcement learning framework. With the aid of the local order parameter, we design a class of reward functions for controlling the chimera state, specifically confining the coherent and incoherent domains to any desired positions in the oscillator array. The proposed method is model-free, in contrast to control schemes that require complete knowledge of the system equations. We test the method on locally coupled Kuramoto oscillators and the nonlocally coupled FitzHugh-Nagumo model. Results show that the control is independent of initial conditions and coupling schemes. Not only the single-headed chimera but also the multi-headed chimera and even the alternating chimera can be obtained, simply by changing the desired position. Beyond that, we discuss the influence of hyperparameters, demonstrate that the method generalizes across network sizes, and show that it can stabilize the drift of the chimera and prevent its collapse in small networks.
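The abstract's reward design rests on the local order parameter, which measures phase coherence in a window around each oscillator on the ring. The sketch below illustrates the idea for Kuramoto phases; the window half-width `delta`, the coherence `threshold`, and the `reward` function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_order_parameter(theta, delta=5):
    """Local order parameter R_i on a ring of N phase oscillators:
    R_i = |(1/(2*delta+1)) * sum_{|j-i|<=delta} exp(1j*theta_j)|,
    with indices taken modulo N (periodic boundary conditions).
    R_i is near 1 in coherent domains and well below 1 in incoherent ones."""
    N = len(theta)
    z = np.exp(1j * theta)
    R = np.empty(N)
    for i in range(N):
        window = np.arange(i - delta, i + delta + 1) % N
        R[i] = np.abs(z[window].mean())
    return R

def reward(theta, target_mask, delta=5, threshold=0.9):
    """Hypothetical reward for confining the coherent domain: the fraction
    of oscillators whose local coherence (R_i > threshold) matches the
    desired layout given by the boolean target_mask. Returns a value in
    [0, 1]; an RL agent such as soft actor-critic would maximize the
    expected cumulative sum of this signal."""
    R = local_order_parameter(theta, delta)
    coherent = R > threshold
    return float(np.mean(coherent == target_mask))
```

For example, a fully synchronized state (all phases equal) scores reward 1.0 against a target mask that asks for coherence everywhere, while a uniformly random phase pattern has low local coherence in every window.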


Source
http://dx.doi.org/10.1063/5.0219748


