Model-based Meta Reinforcement Learning using Graph Structured Surrogate Models and Amortized Policy Search

Open Access
Publication date 2022
Journal Proceedings of Machine Learning Research
Event 39th International Conference on Machine Learning
Volume 162
Pages (from-to) 23055-23077
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Reinforcement learning is a promising paradigm for solving sequential decision-making problems, but low data efficiency and weak generalization across tasks remain bottlenecks in real-world applications. Model-based meta reinforcement learning addresses these issues by learning dynamics models and leveraging knowledge from prior experience. In this paper, we take a closer look at this framework and propose a new posterior-sampling-based approach, consisting of a new model to identify task dynamics together with an amortized policy optimization step. We show that our model, called a graph structured surrogate model (GSSM), achieves competitive dynamics-prediction performance with lower model complexity. Moreover, our amortized policy search obtains high returns and allows fast execution by avoiding test-time policy-gradient updates.
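To make the abstract's posterior-sampling idea concrete, the sketch below is an illustrative toy only, not the paper's method: the learned GSSM dynamics model is replaced by one-dimensional linear dynamics s' = theta*s + a with an unknown task parameter theta, so the posterior over theta has a closed form (Bayesian linear regression with known noise). The "amortized" policy is a fixed closed-form function of the sampled theta, so no policy-gradient updates happen at test time. All names and constants here are hypothetical.

```python
import math
import random

def run(true_theta=0.7, episodes=30, steps=10, noise=0.05, seed=0):
    """Toy posterior-sampling loop (illustrative sketch, not GSSM).

    Dynamics: s' = true_theta * s + a + eps, eps ~ N(0, noise^2).
    We keep a Gaussian posterior over theta in precision form and,
    per episode, act under a single posterior sample (posterior sampling).
    """
    rng = random.Random(seed)
    mean, prec = 0.0, 1.0  # Gaussian prior over theta

    for _ in range(episodes):
        # Posterior sampling: draw one task hypothesis per episode.
        theta_hat = rng.gauss(mean, 1.0 / math.sqrt(prec))
        s = 1.0
        for _ in range(steps):
            # "Amortized" policy: closed-form in theta_hat, drives s toward 0;
            # no test-time gradient updates are performed.
            a = -theta_hat * s
            s_next = true_theta * s + a + rng.gauss(0.0, noise)

            # Conjugate Bayesian update for theta from (s, a, s_next):
            # y = s_next - a = theta * s + eps.
            y = s_next - a
            prec_new = prec + (s * s) / (noise ** 2)
            mean = (prec * mean + s * y / (noise ** 2)) / prec_new
            prec = prec_new
            s = s_next
    return mean, prec

mean, prec = run()
print(round(mean, 2))  # posterior mean concentrates near true_theta
```

As the posterior concentrates, sampled hypotheses approach the true dynamics and the fixed policy becomes near-optimal for the task, mirroring (in miniature) how task identification plus an amortized policy avoids per-task fine-tuning.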
Document type Article
Note International Conference on Machine Learning, 17-23 July 2022, Baltimore, Maryland, USA
Language English
Published at https://proceedings.mlr.press/v162/wang22z.html