Switching between representations in reinforcement learning

Authors
Publication date 2010
Host editors
  • R. Babuška
  • F.C.A. Groen
Book title Interactive collaborative information systems
ISBN
  • 9783642116872
Series Studies in computational intelligence, 281
Pages (from-to) 65-84
Number of pages 585
Publisher Berlin: Springer
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This chapter presents and evaluates an online representation-selection method for factored Markov decision processes (MDPs). The method addresses a special case of the feature selection problem that considers only certain subsets of features, which we call candidate representations. A motivation for the method is that it can potentially handle problems where other structure-learning algorithms are infeasible due to the large degree (number of parents per node) of the associated dynamic Bayesian network. Our method uses switch actions to select a representation and uses off-policy updating to improve the policies of the representations that were not selected. We demonstrate the validity of the method by showing, for a contextual bandit task and a regular MDP, that given a feature set containing only a single relevant feature, the switch method finds this feature very efficiently. We also show, for a contextual bandit task, that switching between a set of relevant features and a subset of these features can outperform each of the individual representations. The reason is that the switch method combines the fast initial performance increase of the small representation with the high asymptotic performance of the large representation.
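The chapter itself gives the full algorithm; as an illustration only, the sketch below shows the core idea on a toy contextual bandit. All names (`REPRS`, `draw_context`, the two-feature reward) are hypothetical and not taken from the chapter: a meta-level "switch action" picks one of two candidate representations (one using a single feature, one using both), and off-policy updating lets every representation learn from each observed reward, not just the representation that was selected.

```python
import random

random.seed(0)

# Toy bandit: the reward depends only on feature 0 of the context,
# so the single-feature representation "f0" is already sufficient.
def draw_context():
    return (random.randint(0, 1), random.randint(0, 1))

def reward(context, action):
    return 1.0 if action == context[0] else 0.0

ACTIONS = [0, 1]
# Candidate representations: each maps the raw context to a state.
REPRS = {"f0": lambda c: (c[0],), "f0f1": lambda c: c}

Q = {name: {} for name in REPRS}   # per-representation action values
S = {name: 0.0 for name in REPRS}  # value estimate of each switch action
alpha, eps = 0.1, 0.1

def q(name, state, a):
    return Q[name].get((state, a), 0.0)

for t in range(5000):
    ctx = draw_context()
    # Switch action: epsilon-greedy over the candidate representations.
    name = (random.choice(list(REPRS)) if random.random() < eps
            else max(REPRS, key=lambda n: S[n]))
    state = REPRS[name](ctx)
    # Epsilon-greedy action under the selected representation.
    a = (random.choice(ACTIONS) if random.random() < eps
         else max(ACTIONS, key=lambda act: q(name, state, act)))
    r = reward(ctx, a)
    # Off-policy updating: every representation, selected or not,
    # updates its value for the action that was actually taken.
    for other in REPRS:
        s_o = REPRS[other](ctx)
        Q[other][(s_o, a)] = q(other, s_o, a) + alpha * (r - q(other, s_o, a))
    # Update the value of the selected switch action.
    S[name] += alpha * (r - S[name])
```

After training, the greedy policy of even the single-feature representation matches the reward structure (pick the action equal to feature 0), which mirrors the abstract's point that a relevant single feature can be identified efficiently while unselected representations keep improving.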
Document type Chapter
Published at https://doi.org/10.1007/978-3-642-11688-9_3