Attention vs non-attention for a Shapley-based explanation method

Open Access
Authors
Publication date 2021
Host editors
  • E. Agirre
  • M. Apidianaki
  • I. Vulić
Book title Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
Book subtitle Proceedings of the workshop: NAACL-HLT 2021, June 10, 2021
ISBN (electronic)
  • 9781954085305
Event 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
Pages (from-to) 129-139
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
The field of explainable AI has recently seen an explosion in the number of explanation methods for highly non-linear deep neural networks. The extent to which such methods – often proposed and tested in the domain of computer vision – are appropriate for addressing the explainability challenges of NLP remains relatively unexplored. In this work, we consider Contextual Decomposition (CD) – a Shapley-based input feature attribution method that has been shown to work well for recurrent NLP models – and test the extent to which it is useful for models that contain attention operations. To this end, we extend CD to cover the operations necessary for attention-based models. We then compare how long-distance subject-verb relationships are processed by models with and without attention, considering a number of different syntactic structures in two different languages: English and Dutch. Our experiments confirm that CD can successfully be applied to attention-based models as well, providing an alternative Shapley-based attribution method for modern neural networks. In particular, using CD, we show that the English and Dutch models exhibit similar processing behaviour, but that under the hood there are consistent differences between our attention and non-attention models.
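For readers unfamiliar with the method, the sketch below illustrates the two-part decomposition that CD is built on (Murdoch et al., 2018): every hidden state h is split as h = β + γ, where β captures the contribution of the input features of interest and γ everything else, and each network operation propagates the two parts separately, splitting nonlinearities via the two-player Shapley value. This is a minimal illustrative NumPy sketch, not code from the paper; the function names (cd_linear, cd_nonlinear) and the convention of assigning the bias and baseline to the irrelevant part are assumptions.

```python
import numpy as np

def cd_linear(beta, gamma, W, b):
    # A linear layer z = W h + b acts on each part of h = beta + gamma
    # separately; assigning the bias b to the irrelevant part is one
    # common convention (an assumption here, not taken from the paper).
    return W @ beta, W @ gamma + b

def cd_nonlinear(beta, gamma, f=np.tanh):
    # Split a pointwise nonlinearity f between the two parts by averaging
    # beta's marginal effect over both orderings of the two "players":
    # the two-player Shapley value. The remainder (including the f(0)
    # baseline) is absorbed by the irrelevant part.
    beta_out = 0.5 * ((f(beta) - f(0.0)) + (f(beta + gamma) - f(gamma)))
    gamma_out = f(beta + gamma) - beta_out
    return beta_out, gamma_out

# Toy usage: attribute one layer's activation to the input feature x[0].
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
beta0 = np.where(np.arange(4) == 0, x, 0.0)      # feature of interest
beta, gamma = cd_nonlinear(*cd_linear(beta0, x - beta0, W, b))
assert np.allclose(beta + gamma, np.tanh(W @ x + b))  # parts sum to the full output
```

The paper's contribution lies in extending this propagation scheme to the operations found in attention layers; the sketch above covers only the feed-forward case.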
Document type Conference contribution
Language English
Published at
  • https://doi.org/10.48550/arXiv.2104.12424 (arXiv preprint)
  • https://doi.org/10.18653/v1/2021.deelio-1.13 (ACL Anthology)
Downloads
2021.deelio-1.13 (Final published version)