The continual generation of behaviors that balance conflicting demands which cannot all be satisfied simultaneously arises naturally in autonomous agents, such as household robots operating over long periods, and in animals in the wild. Homeostatic reinforcement learning (homeostatic RL) is a bio-inspired framework that achieves such multiobjective control through behavioral optimization. Homeostatic RL enables autonomous behavioral optimization using only internal body information, even in complex environments that involve continuous motor control. However, it remains unknown whether the resulting behaviors have long-term properties similar to those of real animals. To clarify this issue, this study focuses on the balancing of multiple nutrients in animal foraging, a situation in which such multiobjective control is achieved by animals in the wild. We adopt the nutritional geometry framework from nutritional biology, which quantitatively characterizes the long-term structure of foraging strategies over multiple nutrients, and construct an analogous verification environment to show experimentally that homeostatic RL agents exhibit the long-term foraging characteristics observed in animals in nature. Furthermore, numerical simulation results show that the agent's long-term foraging characteristics can be controlled by changing the weighting of its multiobjective motivation. These results show that the long-term behavioral characteristics of homeostatic RL agents, whose behavior emerges at the motor-control level, can be predicted and designed from the internal dynamics of the body and the motivation weighting, both of which change in real time.
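To make the idea concrete, here is a minimal sketch of how a homeostatic RL reward signal is commonly defined: reward is the reduction of a "drive" D(h), the weighted distance of the internal state h (e.g., nutrient levels) from a setpoint h*. The weight vector corresponds to the multiobjective motivation weighting discussed above. All function names, exponents, and numerical values below are illustrative assumptions, not the paper's actual formulation or parameters.

```python
import numpy as np

def drive(h, setpoint, weights, m=2.0, n=2.0):
    """Drive D(h) = (sum_i w_i * |h*_i - h_i|^m)^(1/n).

    h: internal state (e.g., levels of each nutrient)
    setpoint: homeostatic target h* for each dimension
    weights: motivation weighting across objectives (assumed form)
    """
    return np.sum(weights * np.abs(setpoint - h) ** m) ** (1.0 / n)

def homeostatic_reward(h_prev, h_next, setpoint, weights):
    """Reward as drive reduction: D(h_t) - D(h_{t+1})."""
    return drive(h_prev, setpoint, weights) - drive(h_next, setpoint, weights)

# Two-nutrient example: a transition that moves the internal state
# toward the setpoint yields a positive reward.
setpoint = np.array([0.5, 0.5])
w = np.array([1.0, 1.0])  # equal motivation weighting (assumption)
r = homeostatic_reward(np.array([0.2, 0.9]), np.array([0.4, 0.6]),
                       setpoint, w)
```

Changing `w` shifts which nutrient deficits dominate the drive, which is the mechanism by which the motivation weighting could steer long-term foraging characteristics.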
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11635831
DOI: http://dx.doi.org/10.1093/pnasnexus/pgae540