Overview of the INEX 2009 ad hoc track

Authors
  • S. Geva
  • J. Kamps
  • M. Lehtonen
  • R. Schenkel
  • J.A. Thom
  • A. Trotman
Publication date 2010
Host editors
  • S. Geva
  • J. Kamps
  • A. Trotman
Book title Focused Retrieval and Evaluation
Book subtitle 8th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2009, Brisbane, Australia, December 7-9, 2009, Revised and Selected Papers
ISBN
  • 9783642145551
ISBN (electronic)
  • 9783642145568
Series Lecture Notes in Computer Science
Event 8th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2009
Pages (from-to) 4-25
Publisher Berlin: Springer
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
This paper gives an overview of the INEX 2009 Ad Hoc Track. The main goals of the Ad Hoc Track were three-fold. The first goal was to investigate the impact of the collection scale and markup, by using a new collection that is again based on the Wikipedia but is over 4 times larger, with longer articles and additional semantic annotations. For this reason the Ad Hoc Track tasks stayed unchanged, and the Thorough Task of INEX 2002–2006 returned. The second goal was to study the impact of more verbose queries on retrieval effectiveness, by using the available markup as structural constraints (now using both the Wikipedia's layout-based markup and the enriched semantic markup) and by the use of phrases. The third goal was to compare different result granularities by allowing systems to retrieve XML elements, ranges of XML elements, or arbitrary passages of text. This investigates the value of the internal document structure (as provided by the XML markup) for retrieving relevant information. The INEX 2009 Ad Hoc Track featured four tasks. For the Thorough Task, a ranked list of results (elements or passages) ordered by estimated relevance was needed. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was needed. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was needed. We discuss the setup of the track and the results for the four tasks.
Document type Conference contribution
Note With erratum
Language English
Published at https://doi.org/10.1007/978-3-642-14556-8_4
Other links http://dx.doi.org/10.1007/978-3-642-14556-8_46