Mutual benefits: Combining reinforcement learning with sequential sampling models

Open Access
Publication date 01-2020
Journal Neuropsychologia
Article number 107261
Volume 136
Number of pages 11
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Psychology Research Institute (PsyRes)
Abstract

Reinforcement learning models of error-driven learning and sequential-sampling models of decision making have provided significant insight into the neural basis of a variety of cognitive processes. Until recently, model-based cognitive neuroscience research using the two frameworks evolved largely independently. Recent efforts have illustrated the complementary nature of both modelling traditions and have shown how they can be integrated into a unified theoretical framework that explains trial-by-trial dependencies in choice behavior as well as response time distributions. Here, we review the theoretical background of integrating the two classes of models and recent empirical efforts towards this goal. We furthermore argue that the integration of the two modelling traditions provides mutual benefits for both fields, and highlight the promise of this approach for cognitive modelling and model-based cognitive neuroscience.
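The kind of integration the abstract describes is often realized by letting a delta-rule learner set the drift rate of a diffusion process, so that one model jointly predicts choices and response times. Below is a minimal illustrative sketch of that idea, not the specific model proposed in the article; the function name, parameter values, and reward probabilities are all hypothetical choices for demonstration.

```python
import numpy as np

def simulate_rl_ddm(n_trials=200, alpha=0.3, scale=2.0, threshold=1.0,
                    t0=0.3, p_reward=(0.8, 0.2), dt=0.001, seed=0):
    """Toy two-armed bandit: Q-learning supplies the drift rate of a DDM.

    All parameters are illustrative. On each trial the drift rate is
    proportional to the Q-value difference, a diffusion process is run
    to a symmetric bound to produce a choice and a response time, and
    the chosen option's Q-value is updated with a delta rule.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                      # Q-values for the two options
    choices, rts = [], []
    for _ in range(n_trials):
        v = scale * (q[0] - q[1])        # drift rate from value difference
        x, t = 0.0, 0.0
        while abs(x) < threshold:        # Euler simulation of the diffusion
            x += v * dt + rng.normal(0.0, np.sqrt(dt))
            t += dt
        choice = 0 if x >= threshold else 1
        reward = float(rng.random() < p_reward[choice])
        q[choice] += alpha * (reward - q[choice])   # delta-rule update
        choices.append(choice)
        rts.append(t0 + t)               # add non-decision time
    return np.array(choices), np.array(rts)
```

In this sketch the trial-by-trial dependency enters through the evolving Q-values, while the full response time distribution falls out of the diffusion process, which is exactly the complementarity the abstract refers to.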

Document type Article
Language English
Published at https://doi.org/10.1016/j.neuropsychologia.2019.107261