The Role of Complex NLP in Transformers for Text Ranking?

Open Access
Authors
Publication date 2022
Book title ICTIR'22
Book subtitle Proceedings of the 2022 ACM SIGIR International Conference on the Theory of Information Retrieval: July 11-12, 2022, Madrid, Spain
ISBN (electronic)
  • 9781450394123
Event 8th ACM SIGIR International Conference on the Theory of Information Retrieval, ICTIR 2022
Pages (from-to) 153-160
Number of pages 8
Publisher New York, NY: The Association for Computing Machinery
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

Even though term-based methods such as BM25 provide strong baselines in ranking, under certain conditions they are dominated by large pre-trained masked language models (MLMs) such as BERT. To date, the source of their effectiveness remains unclear. Is it their ability to truly understand meaning by modeling syntactic aspects? We answer this by manipulating the input order and position information in a way that destroys the natural sequence order of the query and passage, and show that the model still achieves comparable performance. Overall, our results highlight that syntactic aspects do not play a critical role in the effectiveness of re-ranking with BERT. We point to other mechanisms, such as query-passage cross-attention and richer embeddings that capture word meaning from aggregated context regardless of word order, as the main contributors to its superior performance.
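
A minimal sketch of the kind of input-order manipulation described above: shuffling the words of the query and passage before scoring the pair with an off-the-shelf BERT-style cross-encoder re-ranker. This is not the authors' code; the model name, scoring setup, and example texts are illustrative assumptions, and the paper's additional ablation of position embeddings is not shown here.

```python
import random

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed example re-ranker; the paper's experiments use BERT-based rankers.
MODEL = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def shuffle_words(text: str, seed: int = 0) -> str:
    """Destroy the natural word order while keeping the bag of words."""
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def rerank_score(query: str, passage: str) -> float:
    """Cross-encoder relevance score for a (query, passage) pair."""
    inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

query = "what causes ocean tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun."

# Compare the intact pair with a pair whose word order has been destroyed;
# the paper reports that re-ranking effectiveness is largely preserved.
print(rerank_score(query, passage))
print(rerank_score(shuffle_words(query), shuffle_words(passage)))
```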

Document type Conference contribution
Language English
Published at https://doi.org/10.1145/3539813.3545144
Other links https://www.scopus.com/pages/publications/85138381708