Parameter-Efficient Fine-Tuning without Introducing New Latency

Open Access
Authors
Publication date 2023
Host editors
  • A. Rogers
  • J. Boyd-Graber
  • N. Okazaki
Book title The 61st Conference of the Association for Computational Linguistics
Book subtitle Proceedings of the Conference : ACL 2023 : July 9-14, 2023
ISBN (electronic)
  • 9781959429722
Event 61st Annual Meeting of the Association for Computational Linguistics
Volume | Issue number 1
Pages (from-to) 4242–4260
Publisher Stroudsburg, PA: Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Parameter-efficient fine-tuning (PEFT) of pre-trained language models has recently demonstrated remarkable achievements, effectively matching the performance of full fine-tuning while using significantly fewer trainable parameters, and consequently addressing storage and communication constraints. Nonetheless, various PEFT methods are limited by their inherent characteristics. In the case of sparse fine-tuning, which modifies only a small subset of the existing parameters, the selection of fine-tuned parameters is task- and domain-specific, making it unsuitable for federated learning. On the other hand, PEFT methods that add new parameters typically introduce additional inference latency. In this paper, we demonstrate the feasibility of generating a sparse mask in a task-agnostic manner, wherein all downstream tasks share a common mask. Our approach, which relies solely on the magnitude information of pre-trained parameters, surpasses existing methods by a significant margin when evaluated on the GLUE benchmark. Additionally, we introduce a novel adapter technique that applies the adapter directly to pre-trained parameters instead of the hidden representation, thereby achieving the same inference speed as full fine-tuning. Through extensive experiments, our proposed method attains a new state-of-the-art result in terms of both performance and storage efficiency, storing only 0.03% of the parameters of full fine-tuning.
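To make the two ideas in the abstract concrete, here is a minimal sketch in PyTorch, assuming a generic pre-trained model. The keep_ratio value, the choice of keeping the largest-magnitude weights, and all function names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def magnitude_mask(model: nn.Module, keep_ratio: float = 0.0003) -> dict:
    """Task-agnostic sparse mask built from parameter magnitudes only."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(p.numel() * keep_ratio))
        # Keep the k entries with the largest absolute value (assumed criterion).
        threshold = p.detach().abs().flatten().topk(k).values.min()
        masks[name] = (p.detach().abs() >= threshold).to(p.dtype)
    return masks

def mask_gradients(model: nn.Module, masks: dict) -> None:
    """Zero gradients outside the mask so only the shared subset is ever updated."""
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[name])
```

Because the mask depends only on the pre-trained magnitudes, every downstream task (or client in a federated setting) can derive the same mask without exchanging task-specific indices. Likewise, a hedged sketch of an adapter applied to pre-trained weights rather than to hidden representations; the bottleneck parameterization and rank below are assumptions. Since the adapter acts on the weight matrix itself, the adapted weight can be pre-computed and merged once training ends, so inference runs a plain linear layer with no added latency.

```python
class WeightSpaceAdapter(nn.Module):
    """Adapter applied to a linear layer's weight matrix instead of its hidden states."""
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad_(False)   # pre-trained weight stays frozen
        self.down = nn.Linear(linear.in_features, rank, bias=False)
        self.up = nn.Linear(rank, linear.in_features, bias=False)
        nn.init.zeros_(self.up.weight)             # adapter starts as a no-op

    def adapted_weight(self) -> torch.Tensor:
        w = self.linear.weight                     # shape: (out_features, in_features)
        return w + self.up(torch.relu(self.down(w)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.adapted_weight(), self.linear.bias)

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        # Fold the adapter into the weight once after training; inference then uses
        # the plain linear layer, so there is no extra latency at test time.
        self.linear.weight.copy_(self.adapted_weight())
        return self.linear
```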
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2023.acl-long.233
Other links https://aclanthology.org/2023.acl-long.233.mp4