A hierarchical Bayesian approach to assess learning and guessing strategies in reinforcement learning
| Authors | |
|---|---|
| Publication date | December 2019 |
| Journal | Journal of Mathematical Psychology |
| Article number | 102276 |
| Volume | 93 |
| Number of pages | 11 |
| Organisations | |
| Abstract | In two-armed bandit tasks participants learn which stimulus in a stimulus pair is associated with the highest value. In typical reinforcement learning studies, participants are presented with several pairs in a random order; frequently applied analyses assume each pair is learned in a similar way. When tasks become more difficult, however, participants may learn some stimulus pairs while they fail to learn other pairs, that is, they simply guess for a subset of pairs. We put forward the Reinforcement Learning/Guessing (RLGuess) model, enabling researchers to model this learning and guessing process. We implemented the model in a Bayesian hierarchical framework. Simulations showed that the RLGuess model outperforms a standard reinforcement learning model when participants guess: fit is enhanced and parameter estimates become unbiased. An empirical application illustrates the merits of the RLGuess model. |
| Document type | Article |
| Note | With supplementary files. Code provided via the Open Science Framework. |
| Language | English |
| Published at | https://doi.org/10.1016/j.jmp.2019.102276 |
| Other links | https://osf.io/uk684/ |
| Downloads | Schaaf_etal_2019_RLGuess (Final published version) |
| Supplementary materials | |
| Permalink to this page | |
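The core idea of the abstract, mixing a reinforcement learner with per-pair guessing, can be sketched as a simple mixture likelihood. The sketch below is a hypothetical illustration under common modeling assumptions (Q-learning with a softmax choice rule, a latent learn/guess indicator per stimulus pair marginalized out with log-sum-exp), not the authors' actual implementation, which is available on OSF; all function and parameter names here are invented for illustration.

```python
# Hypothetical sketch of an RL/guessing mixture likelihood for one
# stimulus pair in a two-armed bandit task (not the authors' OSF code).
import numpy as np

def rl_loglik(choices, rewards, alpha, beta):
    """Log-likelihood of one pair's choice sequence under Q-learning
    with a softmax choice rule (alpha: learning rate, beta: inverse
    temperature)."""
    q = np.zeros(2)  # Q-values for the two stimuli in the pair
    ll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax
        ll += np.log(p[c])
        q[c] += alpha * (r - q[c])  # prediction-error update
    return ll

def rlguess_loglik(choices, rewards, alpha, beta, pi_learn):
    """Marginal log-likelihood when the pair is learned with probability
    pi_learn and otherwise guessed (each choice 50/50)."""
    ll_rl = rl_loglik(choices, rewards, alpha, beta)
    ll_guess = len(choices) * np.log(0.5)  # pure guessing likelihood
    # marginalize the latent learn/guess indicator via log-sum-exp
    return np.logaddexp(np.log(pi_learn) + ll_rl,
                        np.log(1.0 - pi_learn) + ll_guess)
```

For a pair the participant has actually learned, the RL likelihood dominates and the mixture likelihood lies between the guessing and RL values; in a hierarchical Bayesian treatment, pi_learn and the RL parameters would receive group-level priors.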
