Modeling Label Ambiguity for List-Wise Neural Learning to Rank

Open Access
Authors
Publication date 2017
Book title Neu-IR: Workshop on Neural Information Retrieval
Book subtitle accepted papers
Event SIGIR 2017 Workshop on Neural Information Retrieval (Neu-IR'17)
Number of pages 4
Publisher Ithaca, NY: arXiv
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
List-wise learning to rank methods are considered state of the art. A major problem with these methods is that they ignore the ambiguous nature of relevance labels in learning to rank data. Label ambiguity refers to the phenomenon that multiple documents may be assigned the same relevance label for a given query, so that no preference order should be learned among those documents. In this paper we propose a novel sampling technique for computing a list-wise loss that takes this ambiguity into account. We demonstrate the effectiveness of the proposed method by training a three-layer deep neural network, comparing our new loss function against two strong baselines: ListNet and ListMLE. We show that our method generalizes better and significantly outperforms the baselines on the validation and test sets.
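The sampling idea the abstract describes can be sketched as follows. This is an illustration, not the paper's exact algorithm: it assumes a ground-truth permutation consistent with the relevance labels is sampled by breaking ties among equally-labeled documents uniformly at random, and that a ListMLE-style (Plackett-Luce) negative log-likelihood is then computed over that permutation. All function names, labels, and scores here are hypothetical.

```python
import numpy as np

def sample_permutation(labels, rng):
    # Sort documents by relevance label (descending); ties are broken
    # uniformly at random, so equally-labeled documents receive no
    # fixed preference order across samples.
    noise = rng.random(len(labels))
    return np.lexsort((noise, -np.asarray(labels)))

def listmle_loss(scores, perm):
    # Plackett-Luce negative log-likelihood of the sampled permutation:
    # -sum_i [ s_{perm(i)} - log sum_{j >= i} exp(s_{perm(j)}) ]
    s = np.asarray(scores, dtype=float)[perm]
    # log-sum-exp over the suffix s[i:], computed stably from the end
    suffix_lse = np.logaddexp.accumulate(s[::-1])[::-1]
    return float(np.sum(suffix_lse - s))

rng = np.random.default_rng(0)
labels = [2, 1, 1, 0]           # documents 1 and 2 share a label: their order is ambiguous
scores = [1.5, 0.3, 0.2, -1.0]  # hypothetical model scores for the four documents
perm = sample_permutation(labels, rng)
loss = listmle_loss(scores, perm)
```

Averaging this loss over several sampled permutations per query would then avoid penalizing the model for the arbitrary ordering of equally-labeled documents, which is the ambiguity the abstract refers to.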
Document type Conference contribution
Note Workshop at SIGIR 2017. All accepted papers published on arXiv.org.
Language English
Published at https://arxiv.org/abs/1707.07493
Other links https://neu-ir.weebly.com/