Neural vector spaces for unsupervised information retrieval

Open Access
Publication date October 2018
Journal ACM Transactions on Information Systems
Article number 38
Volume 36, Issue 4
Number of pages 25
Organisations
  • Faculty of Science (FNWI)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract

We propose the Neural Vector Space Model (NVSM), a method that learns representations of documents in an unsupervised manner for news article retrieval. In the NVSM paradigm, we learn low-dimensional representations of words and documents from scratch using gradient descent, and rank documents according to their similarity with query representations that are composed from word representations. We show that NVSM performs better at document ranking than existing latent semantic vector space methods. Adding NVSM to a mixture of lexical language models and a state-of-the-art baseline vector space model yields a statistically significant increase in retrieval effectiveness; NVSM therefore contributes a complementary relevance signal. In addition to semantic matching, we find that NVSM performs well in cases where lexical matching is needed. NVSM learns a notion of term specificity directly from the document collection, without feature engineering, and we show that it also learns regularities related to Luhn significance. Finally, we give advice on how to deploy NVSM in situations where model selection (e.g., cross-validation) is infeasible. We find that an unsupervised ensemble of multiple models trained with different hyperparameter values performs better than a single cross-validated model. Therefore, NVSM can safely be used for ranking documents without supervised relevance judgments.
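The ranking step described in the abstract — composing a query vector from word representations and scoring documents by similarity — can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding matrices here are random stand-ins for the low-dimensional word and document representations that NVSM would learn by gradient descent, the mean-of-word-vectors query composition and cosine similarity are simplifying assumptions, and the vocabulary is hypothetical.

```python
import numpy as np

# Illustrative sketch of NVSM-style ranking (assumptions noted above).
# Random embeddings stand in for representations learned from scratch.
rng = np.random.default_rng(0)

vocab = {"neural": 0, "vector": 1, "space": 2, "retrieval": 3}
word_emb = rng.normal(size=(len(vocab), 8))   # one row per word
doc_emb = rng.normal(size=(5, 8))             # one row per document

def query_representation(query_terms):
    """Compose a query vector by averaging its word vectors."""
    idx = [vocab[t] for t in query_terms if t in vocab]
    return word_emb[idx].mean(axis=0)

def rank_documents(query_terms):
    """Rank documents by cosine similarity to the query vector."""
    q = query_representation(query_terms)
    sims = (doc_emb @ q) / (
        np.linalg.norm(doc_emb, axis=1) * np.linalg.norm(q)
    )
    return np.argsort(-sims)  # best-matching document index first

ranking = rank_documents(["neural", "retrieval"])
```

Because no relevance judgments enter this pipeline, the ranking is fully unsupervised, which is the property the abstract's deployment advice (ensembling models trained with different hyperparameters) builds on.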

Document type Article
Language English
Published at https://doi.org/10.1145/3196826
Other links https://www.scopus.com/pages/publications/85054039864