Hunting active Brownian particles: Learning optimal behavior.

Phys Rev E

Institut für Physik, Johannes Gutenberg-Universität Mainz, Staudingerweg 7-9, 55128 Mainz, Germany.

Published: November 2021

We numerically study active Brownian particles that can respond to environmental cues through a small set of actions (switching their motility and turning left or right with respect to some direction) which are motivated by recent experiments with colloidal self-propelled Janus particles. We employ reinforcement learning to find optimal mappings between the state of particles and these actions. Specifically, we first consider a predator-prey situation in which prey particles try to avoid a predator. Using as reward the squared distance from the predator, we discuss the merits of three state-action sets and show that turning away from the predator is the most successful strategy. We then remove the predator and employ as collective reward the local concentration of signaling molecules exuded by all particles and show that aligning with the concentration gradient leads to chemotactic collapse into a single cluster. Our results illustrate a promising route to obtain local interaction rules and design collective states in active matter.
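The learning setup described above can be sketched as a minimal tabular Q-learning loop for the predator-prey case: the prey's state is the discretized bearing of the predator relative to its own heading, the actions are turning left, turning right, or keeping the heading, and the reward is the squared distance from the predator, as in the abstract. All dynamics parameters, the state discretization, and the hyperparameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: state = predator's bearing relative to the
# prey's heading, binned into N_STATES sectors.
N_STATES = 8
ACTIONS = ("turn_left", "turn_right", "keep_heading")
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1          # illustrative RL hyperparameters
DT, V_PREY, V_PRED, DTHETA = 0.1, 1.0, 0.8, np.pi / 8  # assumed dynamics

def bearing_state(prey, pred, theta):
    """Bin the predator's bearing (relative to prey heading) into a state index."""
    rel = np.arctan2(pred[1] - prey[1], pred[0] - prey[0]) - theta
    return int(((rel + np.pi) % (2 * np.pi)) / (2 * np.pi) * N_STATES) % N_STATES

def train(episodes=200, steps=100):
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        prey = np.zeros(2)
        theta = rng.uniform(0, 2 * np.pi)          # prey orientation
        pred = rng.normal(0.0, 2.0, size=2)        # predator starts nearby
        s = bearing_state(prey, pred, theta)
        for _ in range(steps):
            # epsilon-greedy action selection
            a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
            if ACTIONS[a] == "turn_left":
                theta += DTHETA
            elif ACTIONS[a] == "turn_right":
                theta -= DTHETA
            # Prey self-propels along its heading; predator chases the prey;
            # small rotational noise mimics Brownian reorientation.
            prey = prey + V_PREY * DT * np.array([np.cos(theta), np.sin(theta)])
            pred = pred + V_PRED * DT * (prey - pred) / (np.linalg.norm(prey - pred) + 1e-9)
            theta += rng.normal(0.0, 0.05)
            r = float(np.sum((prey - pred) ** 2))  # reward: squared distance from predator
            s2 = bearing_state(prey, pred, theta)
            Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = train()
```

With this reward, the learned table tends to favor turning actions that rotate the prey's heading away from the predator's bearing, which is the "turn away" strategy the paper reports as most successful; the chemotaxis case would replace the reward with the local concentration of signaling molecules.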

DOI: http://dx.doi.org/10.1103/PhysRevE.104.054614

