- Title: Automatic feature selection for model-based reinforcement learning in factored MDPs
- Conference: Eighth International Conference on Machine Learning and Applications (ICMLA 2009), Miami Beach, FL, USA
- Book/source title: The Eighth International Conference on Machine Learning and Applications
- Book/source subtitle: Proceedings: Miami Beach, Florida, 13-15 December 2009
- Pages (from-to):
- Publisher: Los Alamitos, CA: IEEE Computer Society
- Document type: Conference contribution
- Affiliations: Faculty of Science (FNWI), Informatics Institute (IVI)
Feature selection is an important challenge in machine learning. Unfortunately, most methods for automating feature selection are designed for supervised learning tasks and are thus either inapplicable or impractical for reinforcement learning. This paper presents a new approach to feature selection specifically designed for the challenges of reinforcement learning. In our method, the agent learns a model, represented as a dynamic Bayesian network, of a factored Markov decision process, deduces a minimal feature set from this network, and efficiently computes a policy on this feature set using dynamic programming methods. Experiments in a stock-trading benchmark task demonstrate that this approach can reliably deduce minimal feature sets and that doing so can substantially improve performance and reduce the computational expense of planning.
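The core step the abstract describes — deducing a minimal feature set from a learned dynamic Bayesian network — can be sketched as a backward reachability computation: a feature is relevant if the reward depends on it directly, or if a relevant feature's dynamics depend on it. The sketch below illustrates this idea under assumed inputs; the function name `minimal_feature_set` and the dictionary encoding of the DBN structure (`parents[f]` giving the features that `f`'s next-step value depends on, `reward_parents` giving the features the reward depends on) are hypothetical, not the paper's actual interface.

```python
def minimal_feature_set(parents, reward_parents):
    """Backward-chain from the reward's parents through the DBN:
    keep a feature if the reward depends on it, directly or through
    the transition dynamics of another relevant feature.

    parents: dict mapping each feature to the set of features its
             next-step value depends on (assumed DBN structure).
    reward_parents: set of features the reward node depends on.
    """
    relevant = set(reward_parents)
    frontier = list(reward_parents)
    while frontier:
        f = frontier.pop()
        for p in parents.get(f, ()):
            if p not in relevant:
                relevant.add(p)
                frontier.append(p)
    return relevant

# Toy example: feature 'c' influences nothing the reward depends on,
# so it is excluded from the minimal set.
parents = {'a': {'a', 'b'}, 'b': {'b'}, 'c': {'a', 'c'}}
print(minimal_feature_set(parents, {'a'}))  # {'a', 'b'}
```

Planning with dynamic programming then only needs to enumerate states over the returned set, which is what yields the reduction in computational expense the abstract reports.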