Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations

Open Access
Authors
Publication date 2024
Host editors
  • Y. Al-Onaizan
  • M. Bansal
  • Y.-N. Chen
Book title The 2024 Conference on Empirical Methods in Natural Language Processing : Proceedings of the Conference
Book subtitle EMNLP 2024 : November 12-16, 2024
ISBN (electronic)
  • 9798891761643
Event 2024 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 11558-11573
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Models need appropriate inductive biases to effectively learn from small amounts of data and generalize systematically outside of the training distribution. While Transformers are highly versatile and powerful, they can still benefit from enhanced structural inductive biases for seq2seq tasks, especially those involving syntactic transformations, such as converting active to passive voice or semantic parsing. In this paper, we propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training to perform synthetically generated syntactic transformations of dependency trees given a description of the transformation. Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking, and also improves structural generalization for semantic parsing. Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token, and that the model can leverage these attention heads on downstream tasks.
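The abstract describes intermediate pre-training on synthetically generated syntactic transformations of dependency trees, conditioned on a description of the transformation. As a rough illustration only, the following Python sketch shows what one such synthetic seq2seq instance could look like: a transformation description plus a linearized dependency tree as the source, and the transformed token sequence as the target. The rule format, field names, and the toy "swap" transformation are assumptions for illustration, not the paper's actual data format.

    # Hypothetical sketch of one synthetic pre-training instance:
    # transformation description + linearized dependency tree -> transformed sequence.
    from dataclasses import dataclass

    @dataclass
    class Token:
        form: str
        head: int      # index of the head token, -1 for the root
        deprel: str    # dependency relation label

    def linearize(tree: list[Token]) -> str:
        """Flatten a dependency tree into a token/relation string for the encoder."""
        return " ".join(f"{t.form}/{t.deprel}" for t in tree)

    def apply_swap_rule(tree: list[Token], rel_a: str, rel_b: str) -> list[str]:
        """Toy transformation: swap the surface positions of the rel_a and rel_b
        dependents, leaving all other tokens in place."""
        forms = [t.form for t in tree]
        idx = {t.deprel: i for i, t in enumerate(tree) if t.head != -1}
        if rel_a in idx and rel_b in idx:
            i, j = idx[rel_a], idx[rel_b]
            forms[i], forms[j] = forms[j], forms[i]
        return forms

    # Build one (source, target) pair for seq2seq pre-training.
    tree = [
        Token("cat", 1, "nsubj"),
        Token("chased", -1, "root"),
        Token("dog", 1, "obj"),
    ]
    description = "swap nsubj obj"
    source = f"{description} ; {linearize(tree)}"
    target = " ".join(apply_swap_rule(tree, "nsubj", "obj"))
    print(source)   # swap nsubj obj ; cat/nsubj chased/root dog/obj
    print(target)   # dog chased cat

In this kind of setup, varying the description and the generated trees forces the model to attend to which transformation applies to which token, which is the behaviour the paper's analysis attributes to the learned attention heads.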
Document type Conference contribution
Note With supplementary software
Language English
Published at https://doi.org/10.18653/v1/2024.emnlp-main.645