Computational modelling of Artificial Language Learning: Retention, Recognition & Recurrence
| Field | Value |
|---|---|
| Award date | 29-11-2017 |
| Number of pages | 138 |
| Abstract |

Artificial Language Learning (ALL) is a key paradigm for studying the nature of the learning mechanisms underlying language. In this dissertation, I have used computational modelling to interpret results from ALL experiments on infants, adults and non-human animals, with the goal of understanding the mechanisms of language learning. I conceptualize the process as consisting of three steps: (i) memorization of sequence segments, (ii) computing the propensity to generalize, and (iii) generalization.

For step (i), I have proposed R&R, a processing model that explains segmentation as a result of retention and recognition. This model can account for a range of empirical results on humans and rats (Peña et al., 2002; Toro and Trobalón, 2005; Frank et al., 2010).

Identifying (ii) as a separate step is a contribution of this dissertation. I propose to account for this step with Simple Good-Turing (Good, 1953), a smoothing method used to account for unseen words in corpora.

As for step (iii), I present a critical review of the existing models in order to identify the state of the art and the unresolved issues. I then propose a neural network model to account for the results of one influential experiment (Marcus et al., 1999), built on two core ideas: pre-wiring the connections of the network and pre-training the model to account for previous experience.

The dissertation also discusses methodological issues in computational modelling, including Marr's levels of analysis (Marr, 1982) and common pitfalls of evaluation procedures, and it explores alternative evaluation methods.
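The Simple Good-Turing method mentioned for step (ii) reallocates probability mass from seen to unseen events based on the frequency-of-frequencies distribution. The following is a minimal, illustrative sketch of the standard technique (not the dissertation's own implementation): it reserves mass N1/N for unseen events and smooths the adjusted counts with a log-log linear regression, as in the usual Simple Good-Turing recipe. All function and variable names here are the sketch's own.

```python
import math
from collections import Counter

def simple_good_turing(counts):
    """Illustrative Simple Good-Turing smoothing sketch.

    counts: dict mapping item -> observed count.
    Returns (p0, probs): the probability mass reserved for unseen
    items, and a dict of smoothed probabilities for seen items.
    """
    N = sum(counts.values())
    # Frequency of frequencies: N_r = number of items seen exactly r times.
    freq_of_freq = Counter(counts.values())
    rs = sorted(freq_of_freq)

    # Mass reserved for unseen events: P0 = N_1 / N.
    p0 = freq_of_freq.get(1, 0) / N

    # Average each N_r over the gap to its neighbouring r values (Z_r),
    # then fit log(Z_r) = a + b * log(r) by least squares.
    Z = {}
    for i, r in enumerate(rs):
        lo = rs[i - 1] if i > 0 else 0
        hi = rs[i + 1] if i + 1 < len(rs) else 2 * r - lo
        Z[r] = 2 * freq_of_freq[r] / (hi - lo)
    xs = [math.log(r) for r in rs]
    ys = [math.log(Z[r]) for r in rs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar

    def S(r):  # smoothed estimate of N_r
        return math.exp(a + b * math.log(r))

    # Adjusted counts r* = (r + 1) * S(r + 1) / S(r), renormalised so
    # the seen items share the remaining mass 1 - p0.
    r_star = {r: (r + 1) * S(r + 1) / S(r) for r in rs}
    total = sum(r_star[c] for c in counts.values())
    probs = {w: (1 - p0) * r_star[c] / total for w, c in counts.items()}
    return p0, probs
```

By construction, `p0` plus the probabilities of all seen items sums to one; full Simple Good-Turing implementations additionally switch between raw and smoothed counts per frequency, which this sketch omits for brevity.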
| Document type | PhD thesis |
| Note | Originally co-supervised by R.J.H. Scha. ILLC dissertation series DS-2017-08 |
| Language | English |
