Fast Interleaved Bidirectional Sequence Generation

Open Access
Authors
Publication date 2020
Host editors
  • L. Barrault
  • O. Bojar
  • F. Bougares
  • R. Chatterjee
  • M.R. Costa-Jussà
  • C. Federmann
  • M. Fishel
  • A. Fraser
  • Y. Graham
  • P. Guzman
  • B. Haddow
  • M. Huck
  • A. Jimeno Yepes
  • P. Koehn
  • A. Martins
  • M. Morishita
  • C. Monz
  • M. Nagata
  • T. Nakazawa
  • M. Negri
Book title Fifth Conference on Machine Translation: Proceedings of the Conference
Book subtitle EMNLP: November 19-20, 2020, online
ISBN (electronic)
  • 9781948087810
Event 5th Conference on Machine Translation, WMT 2020
Pages (from-to) 503-518
Number of pages 16
Publisher Stroudsburg, PA: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

Independence assumptions during sequence generation can speed up inference, but parallel generation of highly inter-dependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-to-left directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and self-attention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-to-right SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups of 4×-11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average).
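The interleaving, position adaptation, and masking described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's reference implementation (see the linked bzhangGo/zero repository for that); the helper names and the exact position-indexing convention are assumptions made for the example.

```python
import numpy as np

def interleave(tokens):
    """Interleave a target sequence for bidirectional generation:
    [y1, y2, ..., yn] -> [y1, yn, y2, y(n-1), ...].
    Even interleaved indices come from the left-to-right direction,
    odd indices from the right-to-left direction."""
    out = []
    i, j = 0, len(tokens) - 1
    while i <= j:
        out.append(tokens[i])
        if i != j:
            out.append(tokens[j])
        i, j = i + 1, j - 1
    return out

def ib_positions(length):
    """Adapted word positions for the interleaved sequence: both
    directions restart counting from 0, so the interleaved positions
    read 0, 0, 1, 1, 2, 2, ... (the exact indexing scheme here is an
    illustrative assumption)."""
    return [k // 2 for k in range(length)]

def ib_attention_mask(length):
    """Self-attention mask (1 = may attend). The pair of tokens emitted
    at step t (interleaved indices 2t and 2t+1) may attend to themselves
    and to all tokens from earlier steps, but not to each other, so the
    two simultaneous per-step predictions stay independent."""
    idx = np.arange(length)
    step = idx // 2
    mask = (step[None, :] < step[:, None]) | (idx[None, :] == idx[:, None])
    return mask.astype(int)
```

With this layout, a standard causal Transformer decoder generates two tokens per step, one from each end of the target, which is where the ∼2× speedup over autoregressive decoding comes from.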

Document type Conference contribution
Language English
Published at https://aclanthology.org/2020.wmt-1.62
Other links
  • https://github.com/bzhangGo/zero
  • https://slideslive.com/38939588
  • https://www.scopus.com/pages/publications/85123855093