Deep Generative Model for Joint Alignment and Word Representation

Open Access
Authors
  • M. Rios
  • W. Aziz
  • K. Sima'an
Publication date 2018
Host editors
  • M. Walker
  • H. Ji
  • A. Stent
Book title NAACL-HLT 2018: The 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Book subtitle Proceedings of the Conference: June 1-June 6, 2018, New Orleans, Louisiana
ISBN (electronic)
  • 9781948087278
Event 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Volume 1
Pages (from-to) 1011-1023
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
This work exploits translation data as a source of semantically relevant learning signal for models of word representation. In particular, we exploit equivalence through translation as a form of distributional context and jointly learn how to embed and align with a deep generative model. Our EMBEDALIGN model embeds words in their complete observed context and learns by marginalisation of latent lexical alignments. Moreover, it embeds words as posterior probability densities, rather than point estimates, which allows us to compare words in context using a measure of overlap between distributions (e.g. KL divergence). We investigate our model's performance on a range of lexical semantics tasks, achieving competitive results on several standard benchmarks including natural language inference, paraphrasing, and text similarity.
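The abstract mentions that words are represented as posterior probability densities and compared in context via a measure of distributional overlap such as KL divergence. As a rough illustration only, not the paper's implementation, the sketch below assumes diagonal Gaussian posteriors (a common choice in variational models of this kind); the function name and the toy word vectors are hypothetical.

```python
import numpy as np

def kl_diag_gaussians(mu_p, sigma_p, mu_q, sigma_q):
    """KL( N(mu_p, diag(sigma_p^2)) || N(mu_q, diag(sigma_q^2)) ), summed over dimensions."""
    var_p, var_q = sigma_p ** 2, sigma_q ** 2
    return np.sum(
        np.log(sigma_q / sigma_p)
        + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q)
        - 0.5
    )

# Hypothetical 50-dimensional posteriors for two in-context occurrences of "bank".
rng = np.random.default_rng(0)
mu_bank_river, sigma_bank_river = rng.normal(size=50), np.full(50, 0.3)
mu_bank_money, sigma_bank_money = rng.normal(size=50), np.full(50, 0.3)

# KL divergence is asymmetric; symmetrise it if a single similarity score is wanted.
# Smaller values indicate more similar in-context meanings.
d = 0.5 * (
    kl_diag_gaussians(mu_bank_river, sigma_bank_river, mu_bank_money, sigma_bank_money)
    + kl_diag_gaussians(mu_bank_money, sigma_bank_money, mu_bank_river, sigma_bank_river)
)
print(f"symmetrised KL between the two occurrences of 'bank': {d:.2f}")
```

This only illustrates how density-based representations can be compared; the actual posterior parameters in EMBEDALIGN are produced by the model's inference network from the observed sentence.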
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/N18-1092
Other links http://vimeo.com/277669962