Editing Factual Knowledge in Language Models

Open Access
Authors
Publication date 2021
Host editors
  • M.-C. Moens
  • X. Huang
  • L. Specia
  • S.W. Yih
Book title 2021 Conference on Empirical Methods in Natural Language Processing
Book subtitle EMNLP 2021 : proceedings of the conference : November 7-11, 2021
ISBN (electronic)
  • 9781955917094
Event 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021
Pages (from-to) 6491-6506
Number of pages 16
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

The factual knowledge acquired during pre-training and stored in the parameters of Language Models (LMs) can be useful in downstream tasks (e.g., question answering or textual inference). However, some facts can be incorrectly induced or become obsolete over time. We present KNOWLEDGEEDITOR, a method which can be used to edit this knowledge and, thus, fix 'bugs' or unexpected predictions without the need for expensive retraining or fine-tuning. Besides being computationally efficient, KNOWLEDGEEDITOR does not require any modifications to LM pre-training (e.g., the use of meta-learning). In our approach, we train a hyper-network with constrained optimization to modify a fact without affecting the rest of the knowledge; the trained hyper-network is then used to predict the weight update at test time. We show KNOWLEDGEEDITOR's efficacy with two popular architectures and knowledge-intensive tasks: i) a BERT model fine-tuned for fact-checking, and ii) a sequence-to-sequence BART model for question answering. With our method, changing a prediction on the specific wording of a query tends to result in a consistent change in predictions for its paraphrases as well. We show that this can be further encouraged by exploiting (e.g., automatically-generated) paraphrases during training. Interestingly, our hyper-network can be regarded as a 'probe' revealing which components need to be changed to manipulate factual knowledge; our analysis shows that the updates tend to be concentrated on a small subset of components.
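The abstract's core idea, predicting a gated gradient update that changes one fact while leaving others intact, can be illustrated with a toy sketch. This is NOT the paper's implementation (which conditions an LSTM-based hyper-network on the edit example and trains it with a constrained objective over a full LM); here the "model" is a single linear classifier, the gates are a fixed element-wise sigmoid, and the step size `eta` and loop length are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy stand-in for an LM: one linear layer over 4 "answers", 8-dim inputs.
W = rng.normal(scale=0.1, size=(4, 8))

def predict(W, x):
    return softmax(W @ x)

def grad_nll(W, x, y):
    """Gradient of -log p(y|x) with respect to W (rank-1 outer product)."""
    p = predict(W, x)
    p[y] -= 1.0
    return np.outer(p, x)

# Gated-gradient edit: the hyper-network's role is played here by a fixed
# per-parameter sigmoid gate (alpha=0 gives gate 0.5 everywhere). In the
# paper these gates are *predicted* per edit, concentrating the update on
# a small subset of components.
alpha = np.zeros_like(W)     # gate logits (hypothetical free parameters)
eta = 0.5                    # step size (hypothetical)

def edit(W, x, y):
    gate = 1.0 / (1.0 + np.exp(-alpha))
    return W - eta * gate * grad_nll(W, x, y)

# Impose a new answer on one query with a few gated steps (a toy stand-in
# for the single predicted update in the paper).
x_edit, y_new = rng.normal(size=8), 3
W_edited = W
for _ in range(5):
    W_edited = edit(W_edited, x_edit, y_new)
```

Because the update is an outer product gated element-wise, it touches the weight matrix along a single direction determined by the edit example; the paper's constrained optimization additionally penalizes drift on unrelated inputs, which this sketch omits.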

Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2021.emnlp-main.522
Other links
  • https://github.com/nicola-decao/KnowledgeEditor
  • https://www.scopus.com/pages/publications/85117789284