A review of computational models of basic rule learning: The neural-symbolic debate and beyond

Open Access
Authors
Publication date 08-2019
Journal Psychonomic Bulletin & Review
Volume 26 | Issue 4
Pages (from-to) 1174-1194
Number of pages 21
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

We present a critical review of computational models of the generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77–80, 1999). That study reports evidence of generalization behavior in 7-month-old infants, using an Artificial Language Learning paradigm. The authors fail to replicate this behavior in neural network simulations, and claim that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models and present a critical analysis of them, in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.
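To make the ABA/ABB paradigm concrete, here is a minimal sketch (not code from the paper or any of the reviewed models) of the stimulus structure in the Marcus et al. (1999) experiment: three-syllable strings follow either an ABA or an ABB pattern, and "generalization" means judging whether a string built from novel syllables is still consistent with the familiarized rule. The specific syllables below are illustrative.

```python
def follows_rule(syllables, rule):
    """Return True if a three-syllable string matches the given rule.

    rule is "ABA" (first and third syllables identical) or
    "ABB" (second and third syllables identical).
    """
    a, b, c = syllables
    if rule == "ABA":
        return a == c and a != b
    if rule == "ABB":
        return b == c and a != b
    raise ValueError(f"unknown rule: {rule}")

# Familiarization-style items.
assert follows_rule(["ga", "ti", "ga"], "ABA")
assert follows_rule(["ga", "ti", "ti"], "ABB")

# Test items use syllables never heard during familiarization: an
# ABA-familiarized infant should treat "wo fe wo" as consistent and
# "wo fe fe" as inconsistent, despite the novel vocabulary.
assert follows_rule(["wo", "fe", "wo"], "ABA")
assert not follows_rule(["wo", "fe", "fe"], "ABA")
```

The debate reviewed in the paper concerns what a computational model needs in order to pass exactly this kind of test: succeeding on novel items requires sensitivity to the identity relation between positions, not just to the particular syllables seen during training.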

Document type Review article
Language English
Published at https://doi.org/10.3758/s13423-019-01602-z
Other links https://www.scopus.com/pages/publications/85066798583