We propose a universal method based on deep reinforcement learning (specifically, soft actor-critic) to control chimera states in coupled oscillators. The control policy is learned by maximizing the expected cumulative reward within the reinforcement-learning framework. With the aid of the local order parameter, we design a class of reward functions for controlling the chimera state, specifically confining the coherent and incoherent domains to any desired spatial positions along the oscillator array. The proposed method is model-free, in contrast to control schemes that require complete knowledge of the system equations. We test the method on locally coupled Kuramoto oscillators and on the nonlocally coupled FitzHugh-Nagumo model. The results show that the control is independent of initial conditions and coupling schemes. Not only single-headed chimeras but also multi-headed and even alternating chimeras can be obtained with the method, simply by changing the desired position. Beyond that, we discuss the influence of hyperparameters, demonstrate that the method generalizes across network sizes, and show that it can stabilize the drift of the chimera state and prevent its collapse in small networks.
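As a minimal sketch of what such a reward could look like, the snippet below computes the standard local order parameter R_k = |(1/(2δ+1)) Σ_{|j−k|≤δ} e^{iθ_j}| on a ring of phase oscillators and scores a target layout of coherent and incoherent domains. The specific reward form (mean local coherence inside the target coherent domain, mean incoherence outside it) and the names used here are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def local_order_parameter(theta, delta=5):
    """Local order parameter on a ring:
    R_k = |(1/(2*delta+1)) * sum_{|j-k| <= delta} exp(i*theta_j)|."""
    z = np.exp(1j * np.asarray(theta))
    N = len(z)
    R = np.empty(N)
    for k in range(N):
        idx = np.arange(k - delta, k + delta + 1) % N  # periodic neighborhood
        R[k] = np.abs(z[idx].mean())
    return R

def reward(theta, target_coherent, delta=5):
    """Illustrative reward: favor local coherence (R_k near 1) inside the
    desired coherent domain and incoherence (R_k near 0) elsewhere."""
    R = local_order_parameter(theta, delta)
    return np.where(target_coherent, R, 1.0 - R).mean()

# Example: ask for a coherent domain over the middle half of the ring.
N = 100
theta = np.random.uniform(-np.pi, np.pi, N)
mask = np.zeros(N, dtype=bool)
mask[N // 4: 3 * N // 4] = True  # desired position of the coherent domain
print(reward(theta, mask))
```

Because the target mask is the only ingredient that encodes position, moving or splitting it is what would yield single-headed, multi-headed, or alternating chimeras under this kind of reward.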
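To show how such a reward could plug into soft actor-critic, here is a hypothetical Gymnasium environment for a ring of locally coupled Kuramoto oscillators with an additive control signal at a few pinned sites, trained with Stable-Baselines3's off-the-shelf SAC. The action space, pinning scheme, dynamics simplifications (identical natural frequencies, Euler integration), and all parameters are assumptions for illustration; `reward` is reused from the sketch above:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class ChimeraEnv(gym.Env):
    """Hypothetical environment: N locally coupled Kuramoto oscillators on a
    ring; the agent adds a control term to the phase velocity at pinned sites."""

    def __init__(self, N=100, K=1.0, dt=0.05, n_pinned=10, delta=5, steps=200):
        super().__init__()
        self.N, self.K, self.dt, self.delta, self.max_steps = N, K, dt, delta, steps
        self.pinned = np.linspace(0, N, n_pinned, endpoint=False, dtype=int)
        self.target = np.zeros(N, dtype=bool)
        self.target[N // 4: 3 * N // 4] = True  # desired coherent domain
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(2 * N,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_pinned,), dtype=np.float32)

    def _obs(self):
        # (cos, sin) encoding avoids the discontinuity at the 2*pi phase wrap
        return np.concatenate([np.cos(self.theta), np.sin(self.theta)]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.theta = self.np_random.uniform(-np.pi, np.pi, self.N)
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        # nearest-neighbor coupling on the ring, plus control at pinned sites
        left, right = np.roll(self.theta, 1), np.roll(self.theta, -1)
        dtheta = self.K * (np.sin(left - self.theta) + np.sin(right - self.theta))
        dtheta[self.pinned] += action
        self.theta = (self.theta + self.dt * dtheta) % (2 * np.pi)
        self.t += 1
        r = reward(self.theta, self.target, self.delta)  # from the sketch above
        return self._obs(), float(r), False, self.t >= self.max_steps, {}

model = SAC("MlpPolicy", ChimeraEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```

Encoding each phase as (cos θ, sin θ) keeps the observation continuous across the phase wrap, which tends to make learning easier for the policy and critic networks; the paper's actual observation and action design may differ.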
DOI: http://dx.doi.org/10.1063/5.0219748