Embedding Words as Distributions with a Bayesian Skip-gram Model

Open Access
Authors
Publication date 2018
Host editors
  • E.M. Bender
  • L. Derczynski
  • P. Isabelle
Book title The 27th International Conference on Computational Linguistics
Book subtitle COLING 2018 : proceedings of the conference : August 20-26, 2018, Santa Fe, New Mexico, USA
ISBN (electronic)
  • 9781948087506
Event 27th International Conference on Computational Linguistics
Pages (from-to) 1775-1789
Publisher Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
We introduce a method for embedding words as probability densities in a low-dimensional space. Rather than assuming that a word embedding is fixed across the entire text collection, as in standard word embedding methods, in our Bayesian model we generate it from a word-specific prior density for each occurrence of a given word. Intuitively, for each word, the prior density encodes the distribution of its potential ‘meanings’. These prior densities are conceptually similar to Gaussian embeddings of Vilnis and McCallum (2014). Interestingly, unlike the Gaussian embeddings, we can also obtain context-specific densities: they encode uncertainty about the sense of a word given its context and correspond to the approximate posterior distributions within our model. The context-dependent densities have many potential applications: for example, we show that they can be directly used in the lexical substitution task. We describe an effective estimation method based on the variational autoencoding framework. We demonstrate the effectiveness of our embedding technique on a range of standard benchmarks.
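The abstract's core idea, a broad word-specific prior density that is sharpened into a context-specific posterior density, can be illustrated with diagonal Gaussians. The sketch below is not the paper's actual VAE-based inference; it only shows, under the simplifying assumption that context words contribute independent Gaussian "evidence", how combining that evidence with a word's prior yields a tighter context-specific density (a standard precision-weighted product of Gaussians). All names and numbers are hypothetical.

```python
import numpy as np

def gaussian_product(means, variances):
    """Product of diagonal Gaussians is (up to normalization) Gaussian:
    the result's precision is the sum of precisions, and its mean is the
    precision-weighted average of the component means."""
    precisions = 1.0 / variances                      # shape (k, d)
    post_var = 1.0 / precisions.sum(axis=0)           # combined variance
    post_mean = post_var * (precisions * means).sum(axis=0)
    return post_mean, post_var

# Toy 2-D example: a broad word prior plus two context "evidence" Gaussians.
prior_mean = np.array([0.0, 0.0])
prior_var = np.array([4.0, 4.0])                      # high prior uncertainty
ctx_means = np.array([[1.0, 1.0], [2.0, 0.0]])
ctx_vars = np.array([[1.0, 1.0], [1.0, 2.0]])

means = np.vstack([prior_mean, ctx_means])
variances = np.vstack([prior_var, ctx_vars])
post_mean, post_var = gaussian_product(means, variances)

# Context evidence shrinks uncertainty: posterior variance < prior variance.
assert np.all(post_var < prior_var)
```

This mirrors the qualitative behaviour the abstract describes: the prior density captures a word's range of potential meanings, while observing a context concentrates the density around the sense used in that occurrence.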
Document type Conference contribution
Language English
Published at https://www.aclweb.org/anthology/C18-1151/