Generalized domains for empirical evaluations in reinforcement learning
| | |
|---|---|
| Authors | |
| Publication date | 2009 |
| Book title | Proceedings of the 4th workshop on Evaluation Methods for Machine Learning at ICML-09, Montreal, Canada |
| Event | The 4th workshop on Evaluation Methods for Machine Learning, Montreal, Canada |
| Organisations | |
| Abstract | Many empirical results in reinforcement learning are based on a very small set of environments. These results often represent the best algorithm parameters that were found after an ad hoc tuning or fitting process. We argue that presenting tuned scores from a small set of environments leads to method overfitting, wherein results may not generalize to similar environments. To address this problem, we advocate empirical evaluations using generalized domains: parameterized problem generators that explicitly encode variations in the environment to which the learner should be robust. We argue that evaluating across a set of these generated problems offers a more meaningful evaluation of reinforcement learning algorithms. |
| Document type | Conference contribution |
| Published at | http://www.site.uottawa.ca/ICML09WS/papers/w8.pdf |
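The abstract's central idea, a parameterized problem generator whose parameters encode the environment variations a learner should be robust to, can be sketched in a few lines. The following is a hypothetical illustration only, not the authors' implementation: the gridworld domain, the `GridworldGenerator` class, and all parameter names are assumptions chosen for clarity.

```python
import random


class GridworldGenerator:
    """A hypothetical 'generalized domain': samples gridworld variants
    differing in size, transition noise, and goal placement."""

    def __init__(self, size_range=(5, 15), noise_range=(0.0, 0.3), seed=None):
        self.size_range = size_range
        self.noise_range = noise_range
        self.rng = random.Random(seed)

    def sample(self):
        """Draw one environment instance from the parameterized family."""
        size = self.rng.randint(*self.size_range)
        noise = self.rng.uniform(*self.noise_range)
        # Randomizing the goal prevents an agent from overfitting
        # to a single fixed layout.
        goal = (self.rng.randrange(size), self.rng.randrange(size))
        return {"size": size, "noise": noise, "goal": goal}


def evaluate(agent_score_fn, generator, n_envs=30):
    """Average an agent's score over environments drawn from the
    generator, rather than reporting a tuned score on one instance."""
    envs = [generator.sample() for _ in range(n_envs)]
    return sum(agent_score_fn(env) for env in envs) / n_envs
```

Reporting the mean (and spread) of `evaluate` over many sampled instances, instead of a single tuned score, is the evaluation style the paper advocates.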