Investigating LLM Variability in Personalized Conversational Information Retrieval
| Authors | |
|---|---|
| Publication date | 2025 |
| Book title | SIGIR-AP 2025 |
| Book subtitle | Proceedings of the 2025 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region: December 7-10, 2025, Xi'an, China |
| ISBN (electronic) | |
| Event | 3rd International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, SIGIR-AP 2025 |
| Pages (from-to) | 353-363 |
| Number of pages | 11 |
| Publisher | New York, NY: Association for Computing Machinery |
| Organisations | |
| Abstract | Personalized Conversational Information Retrieval (CIR) has seen rapid progress in recent years, driven by the development of Large Language Models (LLMs). Personalized CIR aims to enhance document retrieval by leveraging user-specific information, such as preferences, knowledge, or constraints, to tailor responses to individual needs. A key resource developed for this task is the TREC iKAT 2023 dataset, designed to evaluate the integration of personalization into CIR pipelines. Building on this resource, Mo et al. explored several strategies for incorporating Personal Textual Knowledge Bases (PTKB) into LLM-based query reformulation. Their findings suggested that personalization from PTKB could be detrimental and that human annotations were often noisy. However, these conclusions were based on single-run experiments using the commercial GPT-3.5 Turbo model, raising concerns about output variability and repeatability. In this reproducibility study, we rigorously reproduce and extend their work, with a focus on LLM output variability and model generalization. We apply the original methods to the newly released TREC iKAT 2024 dataset and evaluate a diverse range of models, including Llama (1B to 70B), Qwen-7B, and closed-source models such as GPT-3.5 and GPT-4o-mini. Our results show that human-selected PTKBs consistently enhance retrieval performance, while LLM-based selection methods do not reliably outperform manual choices. We further compare variance across datasets and observe substantially higher variability on iKAT than on CAsT, highlighting the challenges of evaluating personalized CIR. Notably, recall-oriented metrics exhibit lower variance than precision-oriented ones, a critical insight for first-stage retrievers that was not addressed in the original study. Finally, we underscore the need for multi-run evaluations and variance reporting when assessing LLM-based CIR systems, especially in dense and sparse retrieval or in-context learning settings. By broadening the scope of evaluation across models, datasets, and metrics, our study contributes to more robust and generalizable practices for personalized CIR. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1145/3767695.3769502 |
| Other links | https://www.scopus.com/pages/publications/105026255689 |
| Downloads | 3767695.3769502 (Final published version) |
| Permalink to this page | |
