Search engines that learn from their users

Open Access
Authors
  • A.G. Schuth
Award date 27-05-2016
ISBN
  • 9789461826749
Number of pages 179
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
More than half the world’s population uses web search engines, resulting in over half a billion search queries every single day. For many people, web search engines are among the first resources they turn to when a question arises. Moreover, search engines have for many become the most trusted route to information, more so even than traditional media such as newspapers, news websites, or news channels on television. With this in mind, two things are important from an information retrieval (IR) research perspective. First, it is important to understand how well search engines (rankers) perform; second, this knowledge should be used to improve them.
In the first part of this thesis we investigate how user interactions with search engines can be used to evaluate search engines. In particular, we introduce a new online evaluation paradigm called multileaving, which extends interleaving from pairwise to multiway comparisons. With this new method, fewer users need to be exposed to results from possibly inferior search engines.
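The multileaving idea can be sketched as follows. This is an illustrative implementation of team-draft multileaving, one variant in this family of methods; it is not Lerot's actual API, and the function and parameter names are invented for this sketch. Rankers take turns contributing their best not-yet-used document to one combined result list, and clicks are credited to the ranker ("team") that contributed the clicked document.

```python
import random

def team_draft_multileave(rankings, length):
    """Team-draft multileaving (sketch): rankers pick in a fresh random
    order each round, each contributing its highest-ranked document not
    yet in the combined list. Assumes the rankings jointly contain at
    least `length` distinct documents."""
    combined, teams = [], []
    while len(combined) < length:
        for r in random.sample(range(len(rankings)), len(rankings)):
            doc = next((d for d in rankings[r] if d not in combined), None)
            if doc is not None:
                combined.append(doc)
                teams.append(r)
            if len(combined) == length:
                break
    return combined, teams

def credit(teams, clicked_positions):
    """Count clicks per ranker; the ranker with more credit is inferred
    to be preferred by the user."""
    counts = [0] * (max(teams) + 1)
    for pos in clicked_positions:
        counts[teams[pos]] += 1
    return counts
```

Because a single impression compares all rankers at once, far fewer impressions (and hence fewer users) are needed than with round-robin pairwise interleaving.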
In the second part of this thesis we turn to online learning to rank, building on the evaluation methods introduced and extended in the first part: their comparison outcomes serve as the learning signal. The important implication is that search engines can adapt more quickly to changes in user preferences.
In the last part we introduce a new shared resource and a new evaluation paradigm. First, Lerot is an online evaluation framework that allows us to simulate users interacting with a search engine. Second, we introduce OpenSearch, a new evaluation paradigm involving real users of real search engines.
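Simulation frameworks of this kind typically generate synthetic interactions with a click model. The snippet below sketches the widely used cascade click model as an example of such a simulated user; it is illustrative only and does not reproduce Lerot's actual interface, and the probability parameters are assumed values.

```python
import random

def cascade_clicks(relevances, p_click_rel=0.9, p_click_nonrel=0.1, p_stop=0.5):
    """Cascade click model (sketch): a simulated user scans the result
    list top to bottom, clicks with a relevance-dependent probability,
    and may stop scanning after a click."""
    clicks = []
    for pos, rel in enumerate(relevances):
        p = p_click_rel if rel else p_click_nonrel
        if random.random() < p:
            clicks.append(pos)
            if random.random() < p_stop:
                break  # satisfied user abandons the result list
    return clicks
```

Feeding such simulated clicks to an online evaluation or learning method makes experiments cheap and repeatable before any real users are involved.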
Document type PhD thesis
Note Research conducted at Universiteit van Amsterdam. Series: SIKS dissertation series 2016-11
Language English