Bayes factors for reinforcement-learning models of the Iowa Gambling Task

Authors
Publication date 2016
Journal Decision
Volume 3 | Issue 2
Pages 115-131
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Psychology Research Institute (PsyRes)
Abstract
The psychological processes that underlie performance on the Iowa gambling task (IGT) are often isolated with the help of reinforcement-learning (RL) models. The most popular method for comparing RL models is the BIC post hoc fit criterion—a criterion that weighs goodness-of-fit against model complexity. However, the current implementation of the BIC post hoc fit criterion considers only one dimension of model complexity, namely the number of free parameters. A more sophisticated implementation of the BIC post hoc fit criterion, one that provides a coherent and complete discounting of complexity, is given by the Bayes factor. Here we demonstrate an analysis in which Bayes factors are obtained with a Monte Carlo method known as importance sampling to compare four RL models of the IGT: the Expectancy Valence (EV), Prospect Valence Learning (PVL), PVL-Delta, and Value-Plus-Perseveration (VPP) models. We illustrate the method using a data pool of 771 participants from 11 different studies. Our results provide strong evidence for the VPP model and moderate evidence for the PVL model, but little evidence for the EV and PVL-Delta models—results that were not in line with a BIC post hoc fit analysis. We discuss how our results may be combined with results obtained from other model comparison studies to obtain a balanced and comprehensive assessment of model adequacy. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
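The abstract's core idea—estimating a model's marginal likelihood by importance sampling and taking the ratio of two such estimates as a Bayes factor—can be sketched on a toy problem. The sketch below does not reimplement the paper's EV/PVL/VPP models; it uses a simple Bernoulli likelihood with Beta priors (where the exact marginal likelihood is known in closed form, so the estimator can be checked), and all function names, the Beta(2, 2) proposal, and the toy data are illustrative assumptions.

```python
# Illustrative importance-sampling estimate of a marginal likelihood,
# as used (in far more elaborate form) to compute Bayes factors for RL models.
# Toy setting: k successes in n Bernoulli trials, Beta(a, b) prior on theta.
import math
import random

def beta_pdf(x, a, b):
    """Density of a Beta(a, b) distribution at x."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

def likelihood(theta, k, n):
    """Bernoulli likelihood of k successes in n trials (no binomial coefficient,
    which cancels in the Bayes factor)."""
    return theta ** k * (1 - theta) ** (n - k)

def marginal_likelihood(k, n, prior_a, prior_b,
                        prop_a=2.0, prop_b=2.0, n_samples=50_000, seed=1):
    """Importance-sampling estimate of p(D | M) = E_g[ p(D|theta) p(theta) / g(theta) ],
    with proposal g = Beta(prop_a, prop_b)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        theta = rng.betavariate(prop_a, prop_b)          # draw from the proposal g
        weight = (likelihood(theta, k, n)
                  * beta_pdf(theta, prior_a, prior_b)     # prior density
                  / beta_pdf(theta, prop_a, prop_b))      # proposal density
        total += weight
    return total / n_samples

# Toy data: 7 successes in 10 trials.
k, n = 7, 10
ml_uniform = marginal_likelihood(k, n, 1, 1)   # model with uniform Beta(1, 1) prior
ml_informed = marginal_likelihood(k, n, 5, 5)  # model with Beta(5, 5) prior near 0.5
bayes_factor = ml_uniform / ml_informed
```

For the uniform-prior model the exact marginal likelihood is the Beta function B(k+1, n-k+1) = 1/1320 here, so the estimate can be verified directly; the same importance-sampling identity underlies the paper's analysis, only with RL-model likelihoods over trial-by-trial IGT choices in place of the Bernoulli likelihood.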
Document type Article
Language English
DOI https://doi.org/10.1037/dec0000040