Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information

Open Access
Authors
Publication date 2018
Host editors
  • T. Linzen
  • G. Chrupała
  • A. Alishahi
Book title The 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Book subtitle EMNLP 2018 : proceedings of the First Workshop : November 1, 2018, Brussels, Belgium
ISBN (electronic)
  • 9781948087711
Event 2018 EMNLP Workshop BlackboxNLP
Pages (from-to) 240–248
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
  • Faculty of Science (FNWI)
Abstract
How do neural language models keep track of number agreement between subject and verb? We show that ‘diagnostic classifiers’, trained to predict number from the internal states of the language model, provide a detailed understanding of how, when, and where this information is represented. Moreover, they give us insight into when and where this information is corrupted in cases where the language model ends up making agreement errors. To demonstrate the causal role that the representations we find play, we then use this information to influence the course of the LSTM during the processing of difficult sentences. Results from such an intervention show a large increase in the language model’s accuracy. Together, these results show that diagnostic classifiers give us an unrivalled, detailed look into the representation of linguistic information in neural models, and moreover demonstrate that this knowledge can be used to improve their performance.
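The core idea in the abstract can be illustrated with a minimal sketch: a diagnostic classifier is simply a probe (here, logistic regression) trained to predict a linguistic property, such as subject number, from a model's hidden states. The setup below is hypothetical: it simulates hidden states in which number information is linearly encoded along an assumed direction, rather than using states from a real LSTM language model as in the paper.

```python
import numpy as np

# Hypothetical setup: simulate hidden states whose encoding of subject
# number (0 = singular, 1 = plural) is linearly decodable, then train a
# diagnostic classifier (plain logistic regression) to read it back out.
rng = np.random.default_rng(0)

n, d = 400, 50                       # examples, hidden-state dimensionality
labels = rng.integers(0, 2, n)       # assumed gold number labels
direction = rng.normal(size=d)       # assumed "number" direction in state space
states = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

w, b = np.zeros(d), 0.0              # logistic-regression parameters
for _ in range(500):                 # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(states @ w + b)))
    grad = p - labels                # gradient of the cross-entropy loss
    w -= 0.1 * states.T @ grad / n
    b -= 0.1 * grad.mean()

# High probe accuracy suggests the property is represented in the states.
accuracy = ((states @ w + b > 0) == labels).mean()
print(f"diagnostic accuracy: {accuracy:.2f}")
```

In the paper, such probes are trained on the LSTM's actual hidden and cell states at each timestep, which is what yields the "when and where" picture of agreement information; the learned direction can then be used to intervene on the states during processing.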
Document type Conference contribution
Language English
DOI https://doi.org/10.18653/v1/W18-5426
Preprint https://arxiv.org/abs/1808.08079