Do Transformer Attention Heads Provide Transparency in Abstractive Summarization?

Open Access
Authors
Publication date 12-07-2019
Book title Proceedings of FACTS-IR 2019
Event Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR), co-located with SIGIR 2019
Number of pages 7
Publisher Ithaca, NY: ArXiv
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Learning algorithms are becoming more powerful, often at the cost of increased complexity. In response, the demand for algorithms to be transparent is growing. In NLP tasks, attention distributions learned by attention-based deep learning models are used to gain insight into the models' behavior. To what extent is this perspective valid for all NLP tasks? We investigate whether the distributions calculated by different attention heads in a transformer architecture can be used to improve transparency in the task of abstractive summarization. To this end, we present both a qualitative and a quantitative analysis of the behavior of the attention heads. We show that some attention heads indeed specialize in syntactically and semantically distinct input. We propose an approach to evaluate to what extent the Transformer model relies on specifically learned attention distributions. We also discuss what this implies for using attention distributions as a means of transparency.
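The per-head attention distributions discussed in the abstract can be made concrete with a small sketch. The snippet below is not the paper's code; it is a minimal, self-contained illustration, assuming PyTorch's nn.MultiheadAttention and an illustrative entropy diagnostic, of how per-head distributions can be extracted and summarized for inspection.

```python
# Minimal sketch (not the paper's method): inspect per-head attention
# distributions of a single multi-head attention layer in PyTorch.
# Tensor sizes and the entropy diagnostic are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

num_heads, d_model, seq_len = 4, 32, 10
mha = nn.MultiheadAttention(embed_dim=d_model, num_heads=num_heads, batch_first=True)

x = torch.randn(1, seq_len, d_model)  # one "sentence" of random embeddings

# average_attn_weights=False keeps the per-head distributions separate,
# giving weights of shape (batch, num_heads, tgt_len, src_len).
_, attn = mha(x, x, x, need_weights=True, average_attn_weights=False)

# A simple per-head diagnostic: the entropy of each head's distribution.
# Low entropy = the head focuses on few tokens; high entropy = diffuse attention.
entropy = -(attn * (attn + 1e-12).log()).sum(dim=-1).mean(dim=-1)  # (batch, num_heads)
for h, e in enumerate(entropy[0]):
    print(f"head {h}: mean attention entropy = {e.item():.3f}")
```

A summary statistic like this only points at heads worth inspecting; whether their distributions constitute transparency is exactly the question the paper examines.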
Document type Conference contribution
Language English
Published at https://arxiv.org/abs/1907.00570