A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition

Open Access
Authors
Publication date 2025
Book title WWW '25: Proceedings of the ACM Web Conference 2025
Book subtitle April 28-May 2, 2025, Sydney, NSW, Australia
ISBN (electronic)
  • 9798400712746
Event 34th ACM Web Conference, WWW 2025
Pages (from-to) 4183-4195
Number of pages 13
Publisher New York, NY: Association for Computing Machinery
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract

Zero-shot named entity recognition (NER) aims to develop entity recognition systems from unannotated text corpora. This task is highly challenging because it permits minimal human intervention. Recent work has adapted large language models (LLMs) for zero-shot NER by crafting specialized prompt templates and has advanced models’ self-learning abilities by incorporating self-annotated demonstrations. However, two important challenges persist: (i) correlations between contexts surrounding entities are overlooked, leading to wrong type predictions or entity omissions; and (ii) the indiscriminate use of task demonstrations, retrieved through shallow similarity-based strategies, severely misleads LLMs during inference. In this paper, we introduce the cooperative multi-agent system (CMAS), a novel framework for zero-shot NER that harnesses the collective intelligence of multiple agents to address the challenges outlined above. CMAS has four main agents: (i) a self-annotator, (ii) a type-related feature (TRF) extractor, (iii) a demonstration discriminator, and (iv) an overall predictor. To explicitly capture correlations between contexts surrounding entities, CMAS reformulates NER into two subtasks: recognizing named entities and identifying entity type-related features within the target sentence. To enable controllable utilization of demonstrations, the demonstration discriminator incorporates a self-reflection mechanism that automatically scores each demonstration’s helpfulness for the target sentence. Experimental results show that CMAS significantly improves zero-shot NER performance across six benchmarks, covering both domain-specific and general-domain scenarios. Furthermore, CMAS remains effective in few-shot settings and with various LLM backbones.
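The four-agent pipeline described in the abstract can be sketched as follows. This is a toy, rule-based stand-in for illustration only: in CMAS each agent is realized by prompting an LLM, whereas here self-annotation, TRF extraction, helpfulness scoring, and prediction are replaced by trivial heuristics, and all names (`Demonstration`, `self_annotator`, `trf_extractor`, and so on) are hypothetical, not taken from the paper's code.

```python
from dataclasses import dataclass
from typing import Dict, List

# Toy stand-ins for the four CMAS agents. Real CMAS prompts an LLM at each
# step; here capitalized tokens play the role of entities, and the words
# around them play the role of type-related features (TRFs).

@dataclass
class Demonstration:
    sentence: str
    entities: Dict[str, str]   # surface form -> entity type (self-annotated)
    helpfulness: float = 0.0   # filled in by the demonstration discriminator

def self_annotator(corpus: List[str]) -> List[Demonstration]:
    # Agent (i): pseudo-label unannotated sentences (toy rule: title-cased
    # tokens are treated as ORG entities).
    demos = []
    for sent in corpus:
        ents = {tok: "ORG" for tok in sent.split() if tok.istitle()}
        demos.append(Demonstration(sent, ents))
    return demos

def trf_extractor(sentence: str) -> List[str]:
    # Agent (ii): extract type-related features (toy rule: lower-cased words
    # immediately adjacent to a candidate entity).
    tokens = sentence.split()
    feats: List[str] = []
    for i, tok in enumerate(tokens):
        if tok.istitle():
            feats.extend(tokens[max(0, i - 1):i] + tokens[i + 1:i + 2])
    return [f for f in feats if not f.istitle()]

def demonstration_discriminator(target: str, demos: List[Demonstration],
                                k: int = 2) -> List[Demonstration]:
    # Agent (iii): self-reflection stand-in -- score each demonstration's
    # helpfulness for the target sentence by TRF overlap, keep the top k.
    target_feats = set(trf_extractor(target))
    for d in demos:
        overlap = target_feats & set(trf_extractor(d.sentence))
        d.helpfulness = len(overlap) / (len(target_feats) or 1)
    return sorted(demos, key=lambda d: d.helpfulness, reverse=True)[:k]

def overall_predictor(target: str, demos: List[Demonstration]) -> Dict[str, str]:
    # Agent (iv): recognize entities in the target, reusing entity types
    # observed in the selected helpful demonstrations.
    type_votes = {e: t for d in demos for e, t in d.entities.items()}
    return {tok: type_votes.get(tok, "ORG")
            for tok in target.split() if tok.istitle()}

if __name__ == "__main__":
    corpus = ["Acme ships widgets", "Globex hires staff"]
    demos = demonstration_discriminator("Initech ships widgets",
                                        self_annotator(corpus))
    print(overall_predictor("Initech ships widgets", demos))
```

The point of the sketch is the control flow, not the heuristics: demonstrations are first self-annotated, then filtered by an explicit helpfulness score computed against the target sentence's TRFs, and only the surviving demonstrations inform the final prediction, mirroring the controllable demonstration utilization the abstract describes.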

Document type Conference contribution
Language English
Published at https://doi.org/10.1145/3696410.3714923
Other links
  • https://github.com/WZH-NLP/WWW25-CMAS
  • https://www.scopus.com/pages/publications/105005153348