Back-Translation Sampling by Targeting Difficult Words in Neural Machine Translation

Open Access
Authors
Publication date 2018
Host editors
  • E. Riloff
  • D. Chiang
  • J. Hockenmaier
  • J. Tsujii
Book title Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing : EMNLP 2018
Book subtitle Brussels, Belgium, Oct. 31-Nov. 4
ISBN (electronic)
  • 9781948087841
Event 2018 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 436-446
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Neural Machine Translation has achieved state-of-the-art performance for several language pairs using a combination of parallel and synthetic data. Synthetic data is often generated by back-translating sentences randomly sampled from monolingual data using a reverse translation model. While back-translation has been shown to be very effective in many cases, it is not entirely clear why. In this work, we explore different aspects of back-translation, and show that words with high prediction loss during training benefit most from the addition of synthetic data. We introduce several variations of sampling strategies targeting difficult-to-predict words using prediction losses and frequencies of words. In addition, we target the contexts of difficult words and sample sentences that are similar in context. Experimental results for the WMT news translation task show that our method improves translation quality by up to 1.7 and 1.2 BLEU points over back-translation using random sampling for German-English and English-German, respectively.
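The sampling idea in the abstract can be sketched as follows: collect per-word prediction losses during training, rank word types by average loss, and then prefer monolingual sentences that contain these difficult words when selecting data for back-translation. This is a minimal illustrative sketch, not the paper's implementation; the data format and function names (`difficult_words`, `sample_targeted`) are assumptions for the example.

```python
import random
from collections import defaultdict

def difficult_words(token_losses, k):
    """Return the k word types with the highest average prediction loss.

    token_losses: list of (word, loss) pairs collected during NMT training
    (hypothetical bookkeeping format -- the paper's exact setup may differ).
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for word, loss in token_losses:
        sums[word] += loss
        counts[word] += 1
    avg = {w: sums[w] / counts[w] for w in sums}
    return set(sorted(avg, key=avg.get, reverse=True)[:k])

def sample_targeted(monolingual, hard_words, n, seed=0):
    """Sample up to n monolingual sentences containing a difficult word,
    instead of sampling uniformly at random from all of them."""
    pool = [s for s in monolingual if hard_words & set(s.split())]
    random.Random(seed).shuffle(pool)
    return pool[:n]
```

The selected sentences would then be back-translated with the reverse model to produce targeted synthetic parallel data.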
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/D18-1040