Effects of position bias on click-based recommender evaluation

Authors
Publication date 2014
Host editors
  • M. de Rijke
  • T. Kenter
  • A.P. de Vries
  • C.X. Zhai
  • F. de Jong
  • K. Radinsky
  • K. Hofmann
Book title Advances in Information Retrieval
Book subtitle 36th European Conference on IR Research, ECIR 2014, Amsterdam, The Netherlands, April 13-16, 2014: proceedings
ISBN
  • 9783319060279
ISBN (electronic)
  • 9783319060286
Series Lecture Notes in Computer Science
Event 36th European Conference on Information Retrieval (ECIR '14)
Pages (from-to) 624-630
Publisher Cham: Springer
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Measuring the quality of recommendations produced by a recommender system (RS) is challenging. Labels used for evaluation are typically obtained from users of an RS, either by asking for explicit feedback or by inferring labels from implicit feedback. Both approaches can introduce significant biases into the evaluation process. We investigate biases that may affect labels inferred from implicit feedback. Implicit feedback is easy to collect but can be prone to biases, such as position bias. We examine this bias using click models, and show how bias following these models would affect the outcomes of RS evaluation. We find that evaluation based on implicit and explicit feedback can agree well, but only when the evaluation metrics are designed to take user behavior and preferences into account, stressing the importance of understanding user behavior in deployed RSs.
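The position bias described in the abstract can be illustrated with a minimal simulation, assuming a position-based click model (PBM) in which the probability of a click is the item's relevance multiplied by an examination probability that decays with rank. The function name, decay rate, and item relevances below are illustrative, not taken from the paper:

```python
import random

def simulate_clicks(relevance, exam_decay=0.7, n_users=10_000, seed=42):
    """Simulate clicks under a position-based click model (PBM):
    P(click | item at rank k) = relevance[k] * exam_decay**k.
    Returns the total click count per rank."""
    rng = random.Random(seed)
    clicks = [0] * len(relevance)
    for _ in range(n_users):
        for rank, rel in enumerate(relevance):
            if rng.random() < rel * exam_decay ** rank:
                clicks[rank] += 1
    return clicks

# Three equally relevant items: the one shown higher collects more
# clicks, so raw click counts overstate the quality of top-ranked items
# unless the evaluation metric corrects for examination probability.
clicks = simulate_clicks([0.5, 0.5, 0.5])
```

Under this sketch, identical items accumulate strictly decreasing click counts by rank, which is the distortion the paper argues click-based evaluation must account for.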
Document type Conference contribution
Language English
DOI https://doi.org/10.1007/978-3-319-06028-6_67