Generalisation First, Memorisation Second? Memorisation Localisation for Natural Language Classification Tasks

Open Access
Publication date 2024
Host editors
  • L.-W. Ku
  • A. Martins
  • V. Srikumar
Book title The 62nd Annual Meeting of the Association for Computational Linguistics: Findings of the Association for Computational Linguistics: ACL 2024
Book subtitle ACL 2024: August 11-16, 2024
ISBN (electronic)
  • 9798891760998
Event Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Pages (from-to) 14348-14366
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Memorisation is a natural part of learning from real-world data: neural models pick up on atypical input-output combinations and store those training examples in their parameter space. That this happens is well-known, but how and where are questions that remain largely unanswered. Given a multi-layered neural model, where does memorisation occur in the millions of parameters? Related work reports conflicting findings: a dominant hypothesis based on image classification is that lower layers learn generalisable features and that deeper layers specialise and memorise. Work from NLP suggests this does not apply to language models, but has been mainly focused on memorisation of facts. We expand the scope of the localisation question to 12 natural language classification tasks and apply 4 memorisation localisation techniques. Our results indicate that memorisation is a gradual process rather than a localised one, establish that memorisation is task-dependent, and give nuance to the generalisation first, memorisation second hypothesis.
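To make the "gradual vs. localised" distinction concrete, a minimal sketch is given below. It is not the paper's method: it assumes hypothetical per-layer gradient norms, measured separately on clean and on atypical (potentially memorised) examples, turns them into a per-layer memorisation score, and summarises how spread out those scores are via normalised entropy (near 1.0 = gradual across layers, near 0.0 = concentrated in one layer). All function names and numbers are illustrative assumptions.

```python
import math

def localisation_scores(clean_norms, atypical_norms):
    """Per-layer memorisation scores (hypothetical sketch).

    For each layer, take the ratio of the gradient norm on atypical
    examples to the norm on clean examples; a larger ratio suggests
    the layer is more implicated in memorisation. Scores are then
    normalised into a distribution over layers.
    """
    ratios = [a / c for a, c in zip(clean_norms, atypical_norms, strict=True)] if False else \
             [a / c for a, c in zip(atypical_norms, clean_norms)]
    total = sum(ratios)
    return [r / total for r in ratios]

def spread(scores):
    """Normalised entropy of the score distribution.

    ~1.0: memorisation is spread gradually across all layers.
    ~0.0: memorisation is concentrated in a single layer.
    """
    entropy = -sum(s * math.log(s) for s in scores if s > 0)
    return entropy / math.log(len(scores))
```

For example, identical gradient norms on clean and atypical examples give a uniform score distribution (spread of 1.0, fully gradual), while a single layer whose atypical-example gradients are ten times larger pulls the spread well below 1.0 (more localised).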
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2024.findings-acl.852