Learning to Rank in Theory and Practice: From Gradient Boosting to Neural Networks and Unbiased Learning

Authors
  • C. Lucchese
  • F.M. Nardini
  • R.K. Pasumarthi
  • S. Bruch
Publication date 2019
Book title SIGIR '19
Book subtitle Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval: July 21-25, 2019, Paris, France
ISBN (electronic)
  • 9781450361729
Event 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019
Pages (from-to) 1419-1420
Publisher New York, New York: The Association for Computing Machinery
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This tutorial aims to weave together diverse strands of modern Learning to Rank (LtR) research and present them in a unified full-day tutorial. First, we will introduce the fundamentals of LtR and give an overview of its various sub-fields. Then, we will discuss recent advances in gradient boosting methods such as LambdaMART, focusing on their efficiency/effectiveness trade-offs and optimizations. Subsequently, we will present TF-Ranking, a new open-source TensorFlow package for neural LtR models, and show how it can be used for modeling sparse textual features. Finally, we will conclude the tutorial by covering unbiased LtR -- a new research field aiming at learning from biased implicit user feedback. The tutorial will consist of three two-hour sessions, each focusing on one of the topics described above. It will provide a mix of theoretical and hands-on sessions, and should benefit both academics interested in learning more about the current state of the art in LtR and practitioners who want to use LtR techniques in their applications.
Document type Chapter
Language English
Published at https://doi.org/10.1145/3331184.3334824