How Active is Active Learning: Value Function Method Versus an Approximation Method

Open Access
Authors
Publication date 10-2020
Journal Computational Economics
Volume 56, Issue 3
Pages (from-to) 675-693
Number of pages 19
Organisations
  • Faculty of Economics and Business (FEB) - Amsterdam School of Economics Research Institute (ASE-RI)
  • Faculty of Economics and Business (FEB)
Abstract
In a previous paper, Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning): the value function method and the approximation method. Using the same model and dataset as Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach, and they identify some elements of the model specification that affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in that experiment describes a situation where the controller faces a nonstationary process and there is no penalty on the control. The goal of this paper is to see whether their conclusions hold in the more commonly studied case of a controller facing a stationary process with a positive penalty on the control.
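To fix ideas, the class of problems the abstract refers to can be illustrated with a minimal sketch (not the paper's actual code, and all parameter values are hypothetical): a scalar Beck–Wieland-style model x_t = a·x_{t-1} + b·u_t + e_t with an unknown slope b, where the controller holds a normal belief about b, applies a certainty-equivalence ("passive learning") control rule, and updates the belief with the standard normal–normal (Kalman) recursion. Active-learning methods go further by perturbing the control to speed up learning; this sketch only shows the common building blocks.

```python
import random

def ce_control(x_prev, a, b_hat, target=0.0, penalty=1.0):
    # One-period certainty-equivalent rule: minimize
    # (a*x_prev + b_hat*u - target)^2 + penalty*u^2 over u.
    # A positive `penalty` on the control is the case studied in this paper.
    return -b_hat * (a * x_prev - target) / (b_hat**2 + penalty)

def update_belief(b_hat, var_b, x, x_prev, u, a, var_e):
    # Normal-normal (Kalman) update of the belief about b after observing x:
    # the "signal" is x - a*x_prev = b*u + e.
    if u == 0.0:
        return b_hat, var_b                      # no information when u is zero
    k = var_b * u / (u**2 * var_b + var_e)       # Kalman gain
    resid = x - a * x_prev - b_hat * u           # innovation
    return b_hat + k * resid, (1.0 - k * u) * var_b

def simulate(T=50, a=0.7, b_true=-0.5, var_e=1.0, seed=0):
    # Hypothetical parameter values; a stationary process requires |a| < 1.
    rng = random.Random(seed)
    x, b_hat, var_b = 1.0, -1.0, 1.0             # state and prior belief on b
    for _ in range(T):
        u = ce_control(x, a, b_hat)
        x_new = a * x + b_true * u + rng.gauss(0.0, var_e**0.5)
        b_hat, var_b = update_belief(b_hat, var_b, x_new, x, u, a, var_e)
        x = x_new
    return b_hat, var_b
```

Under this passive rule the posterior variance on b shrinks only as a by-product of control; the value function and approximation methods compared in the paper instead choose u_t taking the future value of that information into account.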
Document type Article
Note In special issue: Experimentation in Economics
Language English
Related publication How active is active learning: value function method vs an approximation method
Published at https://doi.org/10.1007/s10614-020-09968-2