Hierarchical model-based reinforcement learning (HMBRL) aims to combine the sample efficiency of model-based reinforcement learning with the abstraction capabilities of hierarchical reinforcement learning. Although HMBRL holds great promise, the structural and conceptual complexity of current approaches makes it difficult to extract general principles, hindering understanding and adaptation to new use cases and thereby slowing progress in the field. In this work, we describe a novel HMBRL framework and evaluate it thoroughly.