Markovian robots: Minimal navigation strategies for active particles.

Phys Rev E

Université Côte d'Azur, Laboratoire J. A. Dieudonné, UMR 7351 CNRS, Parc Valrose, F-06108 Nice Cedex 02, France.

Published: April 2018

We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain (ensuring the absence of fixed points in the dynamics) with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite these strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements, in the design of NCS motifs and transition rates, to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas into practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that it is possible to navigate through complex information landscapes with such a simple NCS, whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
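The idea of a one-bit NCS driving gradient following can be sketched in simulation. The motif below is a hypothetical illustration, not the paper's exact design: a 1D robot whose Boolean state q flips 0 to 1 at a rate proportional to the local field value c(x), and 1 to 0 at a constant rate, with the latter transition also triggering a tumble (direction reversal). The lag introduced by the two-state chain means the tumble propensity reflects field values sampled slightly earlier along the trajectory, which is what biases the motion; all parameter values are illustrative.

```python
import random

def simulate_mr(n_robots=200, t_max=100.0, dt=0.02,
                v=1.0, L=10.0, k1=1.0, k2=1.0, seed=0):
    """Minimal 1D Markovian-robot sketch (hypothetical motif, not the
    paper's exact NCS). Internal Boolean state q flips 0 -> 1 at rate
    k1 * c(x) and 1 -> 0 at rate k2; the 1 -> 0 transition also
    triggers a tumble. Field c(x) = x / L; reflecting walls at 0, L."""
    rng = random.Random(seed)
    final_positions = []
    for _ in range(n_robots):
        x = rng.uniform(0.0, L)
        d = rng.choice((-1, 1))  # direction of self-propulsion
        q = 0                    # one-bit internal state of the NCS
        t = 0.0
        while t < t_max:
            c = x / L            # instantaneous, local field value
            if q == 0:
                if rng.random() < k1 * c * dt:
                    q = 1        # arm the tumble branch of the chain
            elif rng.random() < k2 * dt:
                q = 0
                d = -d           # tumble on the 1 -> 0 transition
            x += v * d * dt
            if x < 0.0:          # reflecting boundaries
                x, d = -x, 1
            elif x > L:
                x, d = 2 * L - x, -1
            t += dt
        final_positions.append(x)
    return final_positions

positions = simulate_mr()
# Robots moving down-gradient carry a higher tumble propensity (they
# recently visited high c), so runs up-gradient last longer on average.
print(sum(positions) / len(positions))
```

Note that the rates here depend only on the instantaneous, local value of c(x), as the paper requires; the effective memory is encoded entirely in the occupation of the one-bit state.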

Source
http://dx.doi.org/10.1103/PhysRevE.97.042604

Publication Analysis

Top Keywords

markovian robots (8)
minimal navigation (8)
navigation strategies (8)
strategies active (8)
active particles (8)
dynamical external (8)
internal state (8)
boolean variable (8)
closed markov (8)
transition rates (8)

Similar Publications

General value functions for fault detection in multivariate time series data.

Front Robot AI

March 2024

Computing Science Department, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB, Canada.

One of the greatest challenges to the automated production of goods is equipment malfunction. Ideally, machines should be able to automatically predict and detect operational faults in order to minimize downtime and plan for timely maintenance. While traditional condition-based maintenance (CBM) involves costly sensor additions and engineering, machine learning approaches offer the potential to learn from already existing sensors.


Cost-Utility Analysis of Open Radical Hysterectomy Compared to Minimally Invasive Radical Hysterectomy for Early-Stage Cervical Cancer.

Cancers (Basel)

August 2023

Gynecologic Oncology Department, Tel Aviv Sourasky Medical Center, Sackler School of Medicine, Tel Aviv University, Tel Aviv 6423906, Israel.

We aimed to investigate the cost-effectiveness of open surgery, compared to minimally invasive radical hysterectomy for early-stage cervical cancer, using updated survival data. Costs and utilities of each surgical approach were compared using a Markovian decision analysis model. Survival data stratified by surgical approach, along with surgery costs, were drawn from recently published studies.


Energy Harvesting and Task-Aware Multi-Robot Task Allocation in Robotic Wireless Sensor Networks.

Sensors (Basel)

March 2023

Department of Computer Engineering, Bahcesehir University, 34349 Istanbul, Turkey.

In this work, we investigate an energy-aware multi-robot task-allocation (MRTA) problem in a robot network that consists of a base station and several clusters of energy-harvesting (EH) robots. It is assumed that there are M+1 robots in each cluster and that tasks arrive in each round. In each cluster, one robot is elected as the cluster head, which assigns one task to each robot in that round.


Reinforcement Learning (RL) can be considered a sequence modeling task, in which an agent employs a sequence of past state-action-reward experiences to predict a sequence of future actions. In this work, we propose the State-Action-Reward Transformer (StARformer), a Transformer architecture for robot learning with image inputs, which explicitly models short-term state-action-reward representations (StAR-representations), essentially introducing a Markovian-like inductive bias to improve long-term modeling. StARformer first extracts StAR-representations by self-attending patches of image states, action tokens, and reward tokens within a short temporal window.


Hierarchical planning with state abstractions for temporal task specifications.

Auton Robots

June 2022

Department of Computer Science, Brown University, 115 Waterman Street, Providence, RI 02912 USA.

We often specify tasks for a robot using temporal language that can include different levels of abstraction. For example, the command contains spatial abstraction, given that "floor" consists of individual rooms that can also be referred to in isolation ("kitchen", for example). There is also a temporal ordering of events, defined by the word "before".

