Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned

Open Access
Authors
  • E. Voita
  • D. Talbot
  • F. Moiseev
  • R. Sennrich
Publication date 2019
Host editors
  • A. Korhonen
  • D. Traum
  • L. Màrquez
Book title The 57th Annual Meeting of the Association for Computational Linguistics
Book subtitle ACL 2019: Proceedings of the Conference: July 28-August 2, 2019, Florence, Italy
ISBN (electronic)
  • 9781950737482
Event The 57th Annual Meeting of the Association for Computational Linguistics - ACL 2019
Pages (from-to) 5797–5808
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.
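The pruning method described in the abstract attaches a stochastic gate to each attention head and trains the gates under a differentiable relaxation of the L0 penalty, in the spirit of the Hard Concrete distribution of Louizos et al. (2018). The sketch below is a minimal illustrative PyTorch implementation of such gates, not the authors' released code; the class name HardConcreteGate, the temperature, and the stretch interval are assumptions chosen for clarity.

import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """One stochastic gate per attention head, with a differentiable
    surrogate for the L0 penalty (Hard Concrete relaxation). Illustrative sketch."""

    def __init__(self, n_heads, temperature=0.33, stretch=(-0.1, 1.1)):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_heads))  # per-head gate logits
        self.temperature = temperature
        self.gamma, self.zeta = stretch  # interval stretched beyond [0, 1] so gates can hit exactly 0 or 1

    def forward(self):
        if self.training:
            # Reparameterized sample from the binary Concrete distribution
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + self.log_alpha) / self.temperature)
        else:
            # Deterministic gate value at test time
            s = torch.sigmoid(self.log_alpha)
        s = s * (self.zeta - self.gamma) + self.gamma  # stretch to (gamma, zeta)
        return s.clamp(0.0, 1.0)  # hard-clip back to [0, 1]

    def l0_penalty(self):
        # Expected number of non-zero gates: a differentiable surrogate for the L0 norm
        return torch.sigmoid(
            self.log_alpha - self.temperature * torch.log(torch.tensor(-self.gamma / self.zeta))
        ).sum()

# Usage (illustrative): multiply each head's output by its gate and add the
# expected L0 penalty to the translation loss, so unimportant heads are driven to zero.
#   gates = gate_module()                              # shape: (n_heads,)
#   attn_out = attn_out * gates.view(1, -1, 1, 1)      # scale per-head outputs
#   loss = xent_loss + l0_weight * gate_module.l0_penalty()

Heads whose gates converge to zero contribute nothing to the output and can be removed; the specialized, linguistically interpretable heads reported in the paper are the last ones whose gates stay open.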
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/P19-1580
Preprint https://arxiv.org/abs/1905.09418
Other links https://vimeo.com/385434677