Modeling long-term nutritional behaviors using deep homeostatic reinforcement learning.

PNAS Nexus

Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan.

Published: December 2024

AI Article Synopsis

  • The study examines how homeostatic reinforcement learning (RL) can replicate the behavior of animals in balancing multiple nutrients during foraging, emphasizing the framework's ability to optimize behaviors based on internal body information.
  • It utilizes nutritional geometry to quantitatively analyze these foraging strategies, creating an experimental setup to compare the long-term behavioral attributes of homeostatic RL agents with those of real animals.
  • Results indicate that the long-term foraging behaviors of these RL agents can be adjusted by altering their multiobjective motivations, suggesting that their behavior can be both predicted and designed based on internal body dynamics.

Article Abstract

Autonomous agents such as long-term operating household robots, like animals in the natural world, must continually generate behaviors that balance multiple conflicting demands that cannot all be satisfied simultaneously. Homeostatic reinforcement learning (homeostatic RL) is a bio-inspired framework that achieves such multiobjective control through behavioral optimization. Homeostatic RL achieves autonomous behavior optimization using only internal body information in complex environmental systems, including continuous motor control. However, it remains unknown whether the resulting behaviors actually have long-term properties similar to those of real animals. To clarify this issue, this study focuses on the balancing of multiple nutrients in animal foraging as a situation in which such multiobjective control is achieved by animals in the natural world. We then draw on the nutritional geometry framework, which quantitatively characterizes the long-term properties of foraging strategies for multiple nutrients in nutritional biology, and construct a comparable verification environment to show experimentally that homeostatic RL agents exhibit long-term foraging characteristics seen in animals in nature. Furthermore, numerical simulation results show that the agent's long-term foraging characteristics can be controlled by changing the weighting of the agent's multiobjective motivation. These results show that the long-term behavioral characteristics of homeostatic RL agents, which generate behavior at the motor-control level, can be predicted and designed based on the internal dynamics of the body and the weighting of motivation, both of which change in real time.
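The core mechanism the abstract describes can be illustrated with the standard homeostatic RL formulation (drive reduction over internal state deviations, in the style of Keramati and Gutkin). The sketch below is a minimal illustration, not the paper's actual implementation: the drive exponents, setpoints, and per-nutrient weights are assumed values, and the weights play the role of the "multiobjective motivation" weighting the abstract refers to.

```python
def drive(state, setpoints, weights, m=3.0, n=4.0):
    """Drive D(h): weighted distance of the internal state from its setpoints.
    The exponents m, n and the weights are illustrative assumptions."""
    return sum(w * abs(sp - h) ** n
               for h, sp, w in zip(state, setpoints, weights)) ** (m / n)

def reward(state, next_state, setpoints, weights):
    """Drive-reduction reward: positive when an action moves the internal
    state (e.g. nutrient levels) closer to the homeostatic setpoints."""
    return drive(state, setpoints, weights) - drive(next_state, setpoints, weights)

# Two hypothetical nutrients (e.g. protein and carbohydrate), setpoint 1.0 each.
setpoints = [1.0, 1.0]
weights = [0.5, 0.5]  # multiobjective motivation weights; changing these shifts foraging

# Eating a protein-rich item raises the first state variable toward its setpoint,
# so the drive-reduction reward is positive.
r = reward([0.4, 0.9], [0.6, 0.9], setpoints, weights)
```

Under this formulation, an ordinary RL algorithm maximizing the cumulative drive-reduction reward implicitly balances all nutrients at once, which is how a single scalar reward can yield the multiobjective foraging behavior the study analyzes.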

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11635831 (PMC)
http://dx.doi.org/10.1093/pnasnexus/pgae540 (DOI Listing)

Publication Analysis

Top Keywords

homeostatic reinforcement (8)
reinforcement learning (8)
animals natural (8)
multiobjective control (8)
motor control (8)
multiple nutrients (8)
homeostatic agents (8)
long-term foraging (8)
foraging characteristics (8)
homeostatic (6)
