English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too
| Authors | Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Samuel R. Bowman |
|---|---|
| Publication date | 2020 |
| Book title | The 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing |
| Book subtitle | AACL-IJCNLP 2020 : proceedings of the conference : December 4-7, 2020 |
| Event | 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing |
| Pages (from-to) | 557–575 |
| Publisher | Stroudsburg, PA: The Association for Computational Linguistics |
| Abstract | Intermediate-task training—fine-tuning a pretrained model on an intermediate task before fine-tuning again on the target task—often substantially improves model performance on language-understanding tasks in monolingual English settings. We investigate whether English intermediate-task training is still helpful on non-English target tasks. Using nine intermediate language-understanding tasks, we evaluate intermediate-task transfer in a zero-shot cross-lingual setting on the XTREME benchmark. We see large improvements from intermediate training on the BUCC and Tatoeba sentence retrieval tasks and moderate improvements on question-answering target tasks. MNLI, SQuAD and HellaSwag achieve the best overall results as intermediate tasks, while multi-task intermediate training offers small additional improvements. Using our best intermediate-task models for each target task, we obtain a 5.4 point improvement over XLM-R Large on the XTREME benchmark, setting the state of the art as of June 2020. We also investigate continuing multilingual MLM during intermediate-task training and using machine-translated intermediate-task data, but neither consistently outperforms simply performing English intermediate-task training. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://www.aclweb.org/anthology/2020.aacl-main.56/ |
| Downloads | 2020.aacl-main.56 (Final published version) |
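
The abstract describes a two-stage recipe: fine-tune a pretrained multilingual encoder on an English intermediate task, then fine-tune again on the English target-task data and evaluate zero-shot on the non-English test sets. The sketch below illustrates that pipeline with the Hugging Face `transformers` and `datasets` libraries, which is an assumption made here for illustration rather than the paper's own codebase. MNLI as the intermediate task, PAWS-X as the target task, and all hyperparameters are likewise illustrative choices, not the paper's settings.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-large"  # the baseline model the paper builds on
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def finetune(model, dataset, output_dir):
    # Illustrative hyperparameters only.
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=2,
                             per_device_train_batch_size=16, learning_rate=1e-5)
    Trainer(model=model, args=args, train_dataset=dataset,
            tokenizer=tokenizer).train()
    model.save_pretrained(output_dir)
    return output_dir

# Stage 1: English intermediate-task training (MNLI, a 3-way sentence-pair task).
mnli = load_dataset("multi_nli", split="train").map(
    lambda b: tokenizer(b["premise"], b["hypothesis"],
                        truncation=True, max_length=128), batched=True)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)
ckpt = finetune(model, mnli, "stage1-mnli")

# Stage 2: fine-tune the stage-1 encoder on English target-task data
# (PAWS-X, a binary task, so the classification head is re-initialized).
pawsx_en = load_dataset("paws-x", "en", split="train").map(
    lambda b: tokenizer(b["sentence1"], b["sentence2"],
                        truncation=True, max_length=128), batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    ckpt, num_labels=2, ignore_mismatched_sizes=True)
finetune(model, pawsx_en, "stage2-pawsx-en")

# "Zero-shot cross-lingual" means evaluation then runs on the non-English
# PAWS-X test sets directly, with no further training in those languages.
```

Only the classification head is re-initialized between stages; keeping the fine-tuned encoder weights is what lets the representations learned on the intermediate task carry over to the target task.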