Addressing Function Approximation Error in Actor-Critic Methods
| Authors | Scott Fujimoto, Herman van Hoof, David Meger |
|---|---|
| Publication date | 2018 |
| Journal | Proceedings of Machine Learning Research |
| Event | 35th International Conference on Machine Learning |
| Volume | 80 |
| Pages (from-to) | 1587-1596 |
| Number of pages | 10 |
| Organisations | |
| Abstract | In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested. |
| Document type | Article |
| Note | With supplementary file. - International Conference on Machine Learning, 10-15 July 2018, Stockholmsmässan, Stockholm, Sweden. - In the print proceedings, pp. 2587-2601. |
| Language | English |
| Published at | http://proceedings.mlr.press/v80/fujimoto18a.html |
| Other links | http://www.proceedings.com/40527.html |
| Downloads | fujimoto18a (Final published version) |
| Supplementary materials | |
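
The abstract describes the paper's two main mechanisms: clipped double Q-learning (taking the minimum over a pair of critics to limit overestimation) and delayed policy updates against slowly-moving target networks. Below is a minimal PyTorch sketch of a TD3-style update built from those ideas; the network sizes, hyperparameters, and names (`actor_t`, `critic1_t`, `update`, ...) are illustrative assumptions, not the authors' released code.

```python
# Sketch of TD3-style clipped double Q-learning with delayed policy updates.
# All shapes, names, and hyperparameters are illustrative assumptions.
import copy

import torch
import torch.nn as nn

state_dim, action_dim = 3, 1
gamma, tau, policy_delay = 0.99, 0.005, 2

def mlp(in_dim, out_dim, tanh=False):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if tanh:
        layers.append(nn.Tanh())  # bound actions to [-1, 1]
    return nn.Sequential(*layers)

actor = mlp(state_dim, action_dim, tanh=True)
critic1 = mlp(state_dim + action_dim, 1)
critic2 = mlp(state_dim + action_dim, 1)
# Target networks start as copies and track the originals via Polyak averaging.
actor_t, critic1_t, critic2_t = (copy.deepcopy(n) for n in (actor, critic1, critic2))

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(
    list(critic1.parameters()) + list(critic2.parameters()), lr=1e-3)

def update(batch, step):
    s, a, r, s2, done = batch
    with torch.no_grad():
        # Target policy smoothing: clipped noise on the target action.
        noise = (0.2 * torch.randn_like(a)).clamp(-0.5, 0.5)
        a2 = (actor_t(s2) + noise).clamp(-1.0, 1.0)
        # Clipped double Q-learning: the minimum of the two target critics
        # limits overestimation in the bootstrapped target.
        sa2 = torch.cat([s2, a2], dim=1)
        y = r + gamma * (1 - done) * torch.min(critic1_t(sa2), critic2_t(sa2))
    sa = torch.cat([s, a], dim=1)
    critic_loss = ((critic1(sa) - y) ** 2).mean() + ((critic2(sa) - y) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Delayed policy updates: move the actor and the target networks
    # less frequently than the critics to reduce per-update error.
    if step % policy_delay == 0:
        actor_loss = -critic1(torch.cat([s, actor(s)], dim=1)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()
        for net, net_t in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)

# Toy usage with a random batch of 32 transitions.
batch = (torch.randn(32, state_dim),
         torch.rand(32, action_dim) * 2 - 1,
         torch.randn(32, 1),
         torch.randn(32, state_dim),
         torch.zeros(32, 1))
for step in range(4):
    update(batch, step)
```

Calling `update(batch, step)` once per environment step mirrors the structure the abstract describes: the critics are trained every step against the min-of-two bootstrapped target, while the actor and target networks move only every `policy_delay` steps.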
