Feature Interactions Reveal Linguistic Structure in Language Models

Open Access
Authors
Publication date 2023
Host editors
  • A. Rogers
  • J. Boyd-Graber
  • N. Okazaki
Book title Findings of the Association for Computational Linguistics: ACL 2023
Book subtitle July 9-14, 2023
ISBN (electronic)
  • 9781959429623
Event 61st Annual Meeting of the Association for Computational Linguistics
Pages (from-to) 8697–8712
Publisher Stroudsburg, PA: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
We study feature interactions in the context of feature attribution methods for post-hoc interpretability. In interpretability research, getting to grips with feature interactions is increasingly recognised as an important challenge, because interacting features are key to the success of neural networks. Feature interactions allow a model to build up hierarchical representations of its input, and might provide an ideal starting point for investigating linguistic structure in language models. However, uncovering the exact role that these interactions play is also difficult, and a diverse range of interaction attribution methods has been proposed. In this paper, we focus on the question of which of these methods most faithfully reflects the inner workings of the target models. We work out a grey-box methodology, in which we train models to perfection on a formal language classification task, using PCFGs. We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model. Based on these findings, we extend our evaluation to a case study on language models, providing novel insights into the linguistic structure that these models have acquired.
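The grey-box setup sketched in the abstract can be illustrated with a small example. The Python snippet below is written for this page and is not taken from the paper: the toy grammar, the corruption scheme, and the dataset size are invented assumptions. It merely shows how strings sampled from a known PCFG can be paired with corrupted counterparts to form a binary classification task whose ground-truth rules are known in advance, which is what makes such a task suitable for evaluating the faithfulness of attribution methods.

```python
import random

# A toy PCFG: each nonterminal maps to a list of (expansion, probability) pairs.
# These rules are illustrative only; the paper uses its own PCFGs, not these.
PCFG = {
    "S":   [(("NP", "VP"), 1.0)],
    "NP":  [(("Det", "N"), 0.7), (("Det", "N", "PP"), 0.3)],
    "VP":  [(("V", "NP"), 0.8), (("V", "NP", "PP"), 0.2)],
    "PP":  [(("P", "NP"), 1.0)],
    "Det": [(("the",), 0.6), (("a",), 0.4)],
    "N":   [(("dog",), 0.5), (("cat",), 0.5)],
    "V":   [(("sees",), 0.5), (("chases",), 0.5)],
    "P":   [(("near",), 1.0)],
}

def sample(symbol="S", depth=0, max_depth=8):
    """Recursively sample a terminal string from the toy PCFG."""
    if symbol not in PCFG:                  # terminal symbol
        return [symbol]
    rules, probs = zip(*PCFG[symbol])
    if depth >= max_depth:                  # guard against deep recursion:
        rules, probs = (rules[0],), (1.0,)  # fall back to the first rule
    expansion = random.choices(rules, weights=probs, k=1)[0]
    tokens = []
    for sym in expansion:
        tokens.extend(sample(sym, depth + 1, max_depth))
    return tokens

def corrupt(tokens):
    """Create a negative example by swapping two adjacent tokens,
    which breaks the grammatical structure with high probability."""
    tokens = list(tokens)
    if len(tokens) > 1:
        i = random.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    return tokens

# Build a small labelled dataset: label 1 for grammatical strings,
# label 0 for corrupted ones. A classifier trained to perfection on such
# data must encode the underlying grammar, giving a known ground truth
# against which interaction attribution methods can be checked.
data = []
for _ in range(1000):
    s = sample()
    data.append((" ".join(s), 1))
    data.append((" ".join(corrupt(s)), 0))

print(data[:4])
```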
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2023.findings-acl.554
Downloads
2023.findings-acl.554 (Final published version)