Designing reward functions that fully align with human intent is often challenging. Preference-based Reinforcement Learning (PbRL) provides a framework in which humans select preferred segments through pairwise comparisons of behavior trajectory segments, facilitating reward function learning. However, existing methods collect non-dynamic preferences and struggle to provide accurate information about preference intensity. We propose the scaling preference (SP) feedback method and the qualitative and quantitative scaling preference (Q2SP) feedback method, which allow humans to express the true degree of preference between trajectories, thus helping the reward function capture human preferences more accurately from offline data. Our key insight is that more detailed feedback facilitates the learning of reward functions that better align with human intent. Experiments demonstrate that, across a range of control and robotic benchmark tasks, our methods are highly competitive with baselines and state-of-the-art approaches.
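To make the idea of preference-intensity feedback concrete, here is a minimal sketch of reward learning from scaled preference labels. It is not the paper's SP/Q2SP implementation; it only assumes that the degree of preference is encoded as a soft label y in [0, 1] (rather than a hard 0/1 choice) and plugged into a standard Bradley-Terry-style cross-entropy over summed segment rewards, with a hypothetical `RewardModel` MLP.

```python
# Sketch of reward learning from scaled (soft) preference labels.
# Assumptions (not from the paper): the scaled preference is a soft label
# y in [0, 1] giving the degree to which segment_1 is preferred over
# segment_0, and the reward model is a small MLP trained with a
# Bradley-Terry-style cross-entropy on summed segment rewards.
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # Per-step reward for each (state, action) pair in a segment.
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def scaled_preference_loss(model, seg0, seg1, soft_label):
    """Cross-entropy between the Bradley-Terry preference probability and a
    scaled (soft) human label instead of a hard binary choice.

    seg0, seg1: tuples (obs, act) of tensors with shape (batch, T, dim)
    soft_label: tensor of shape (batch,) with values in [0, 1]
    """
    r0 = model(*seg0).sum(dim=-1)   # predicted return of segment 0
    r1 = model(*seg1).sum(dim=-1)   # predicted return of segment 1
    logit = r1 - r0                 # P(seg1 preferred) = sigmoid(r1 - r0)
    return nn.functional.binary_cross_entropy_with_logits(logit, soft_label)
```

With hard labels (y in {0, 1}) this reduces to the usual pairwise preference loss; soft labels let the human's stated degree of preference shift the target probability, which is the kind of intensity information the abstract argues standard pairwise feedback cannot convey.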
DOI: http://dx.doi.org/10.1016/j.neunet.2024.106848