Evaluating the Knowledge Base Completion Potential of GPT

Open Access
Authors
Publication date 2023
Host editors
  • H. Bouamor
  • J. Pino
  • K. Bali
Book title The 2023 Conference on Empirical Methods in Natural Language Processing: Findings of the Association for Computational Linguistics: EMNLP 2023
Book subtitle December 6-10, 2023
ISBN (electronic)
  • 9798891760615
Event 2023 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 6432-6443
Number of pages 12
Publisher Stroudsburg, PA: Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract

Structured knowledge bases (KBs) are an asset for search engines and other applications, but are inevitably incomplete. Language models (LMs) have been proposed for unsupervised knowledge base completion (KBC), yet their ability to do this at scale and with high accuracy remains an open question. Prior experimental studies mostly fall short because they evaluate only on popular subjects or sample already existing facts from KBs. In this work, we perform a careful evaluation of GPT's potential to complete the largest public KB: Wikidata. We find that, despite their size and capabilities, models like GPT-3, ChatGPT and GPT-4 do not achieve fully convincing results on this task. Nonetheless, they provide solid improvements over earlier approaches with smaller LMs. In particular, we show that, with proper thresholding, GPT-3 enables extending Wikidata by 27M facts at 90% precision.
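The thresholding idea in the abstract can be illustrated with a minimal sketch (hypothetical function and toy data, not the paper's actual pipeline): given model-scored candidate facts and a small human-verified validation sample, pick the lowest confidence cutoff that still meets a target precision, then keep only predictions above it.

```python
# Minimal sketch of precision-targeted thresholding, assuming each
# candidate fact comes with a model confidence score and the validation
# sample has human-verified correctness labels. All names and data here
# are illustrative, not from the paper.

def pick_threshold(scored, target_precision=0.90):
    """scored: list of (confidence, is_correct) pairs from a validation set.

    Returns the smallest confidence cutoff whose precision on the
    validation pairs meets target_precision, or None if unreachable.
    """
    for cutoff, _ in sorted(scored):  # try each observed confidence, ascending
        kept = [ok for conf, ok in scored if conf >= cutoff]
        if kept and sum(kept) / len(kept) >= target_precision:
            return cutoff  # first passing cutoff is the smallest
    return None

# Toy validation sample: (model confidence, human-verified correctness).
sample = [(0.95, True), (0.90, True), (0.85, True), (0.80, False),
          (0.75, True), (0.60, False), (0.40, False)]
threshold = pick_threshold(sample, target_precision=0.90)
```

With this toy sample the chosen cutoff is 0.85: at that level three of three kept predictions are correct, while any lower cutoff admits at least one error and drops below 90% precision. The same trade-off governs how many new facts can be added at a fixed precision target.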

Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2023.findings-emnlp.426
Other links https://www.scopus.com/pages/publications/85179550011
Downloads
2023.findings-emnlp.426 (Final published version)