SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters

This is the official implementation of the paper:

SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters
Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao
In Findings of EMNLP 2022.

Adapter tuning, which freezes the pretrained language model (PLM) and fine-tunes only a few extra modules, has become an appealing and efficient alternative to full model fine-tuning. Although computationally efficient, recent Adapters often increase their parameter count (e.g., the bottleneck dimension) to match the performance of full fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter efficiency of Adapters through the lens of network pruning (we call this plug-in concept SparseAdapter) and find that SparseAdapter achieves comparable or better performance than standard Adapters at sparse ratios of up to 80%. Based on this finding, we introduce a simple but effective setting, Large-Sparse, which enlarges the bottleneck dimension while pruning more aggressively, improving the model capacity of Adapters under the same parameter budget. Experiments with five competitive Adapters on three advanced PLMs show that, with a proper pruning method (e.g., SNIP) and sparse ratio (e.g., 40%), SparseAdapter consistently outperforms its corresponding counterpart. Encouragingly, with the Large-Sparse setting we obtain further gains, even outperforming full fine-tuning by a large margin.
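
The core idea can be illustrated with a short, self-contained sketch: score every adapter weight with the SNIP criterion |w · ∂L/∂w| on a single batch, keep only the top (1 − sparse ratio) fraction, and fine-tune the surviving weights. This is an illustrative sketch only; the Adapter class, the snip_masks helper, and the hyperparameters below are hypothetical names, not this repository's API. The actual integration lives in the run_*_sparse.py scripts.

# Minimal sketch of SNIP-style pruning applied to a bottleneck adapter.
# All names here (Adapter, snip_masks, sparse_ratio) are illustrative.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """A standard bottleneck adapter: down-projection, nonlinearity, up-projection."""

    def __init__(self, hidden_size: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x):
        # Residual connection keeps the frozen PLM representation intact.
        return x + self.up(self.act(self.down(x)))


def snip_masks(adapter: nn.Module, batch, loss_fn, sparse_ratio: float):
    """Compute SNIP sensitivity scores |w * dL/dw| on one batch and keep
    only the top (1 - sparse_ratio) fraction of adapter weights."""
    adapter.zero_grad()
    loss = loss_fn(adapter(batch))
    loss.backward()

    scores, params = [], []
    for p in adapter.parameters():
        if p.dim() > 1:  # prune weight matrices, keep biases dense
            scores.append((p.grad * p).abs().flatten())
            params.append(p)

    all_scores = torch.cat(scores)
    k = int((1.0 - sparse_ratio) * all_scores.numel())
    threshold = torch.topk(all_scores, k, largest=True).values.min()

    masks = [(s.view_as(p) >= threshold).float() for s, p in zip(scores, params)]
    # Apply the masks once at initialization; they stay fixed during fine-tuning.
    with torch.no_grad():
        for p, m in zip(params, masks):
            p.mul_(m)
    return masks


if __name__ == "__main__":
    adapter = Adapter(hidden_size=768, bottleneck=64)
    x = torch.randn(8, 768)
    # Dummy loss for illustration; in practice this is the task loss on a real batch.
    masks = snip_masks(adapter, x, lambda out: out.pow(2).mean(), sparse_ratio=0.8)
    kept = sum(int(m.sum()) for m in masks)
    total = sum(m.numel() for m in masks)
    print(f"kept {kept}/{total} adapter weights")

Under the Large-Sparse setting, the bottleneck dimension is scaled up while the sparse ratio is raised accordingly, so the number of non-zero parameters stays roughly constant (e.g., doubling the bottleneck at 50% sparsity keeps the same budget as a dense adapter of the original size).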

Requirements

  • torch==1.13.1
  • transformers==4.17.0
  • tokenizers==0.10.1
  • nltk==3.5

To install requirements, run pip install -r requirements.txt.

Usage

To fine-tune SparseAdapter, run one of the following scripts (an example invocation is shown after the lists):

  • examples/pytorch/text-classification/run_glue_sparse.py
  • examples/pytorch/question-answering/run_qa_sparse.py
  • examples/pytorch/summarization/run_summarization_sparse.py

You can also run the corresponding shell scripts:

  • examples/pytorch/text-classification/run_glue.sh
  • examples/pytorch/question-answering/run_qa.sh
  • examples/pytorch/summarization/run_summarization.sh
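
For reference, a hypothetical GLUE invocation is sketched below. It uses only the standard argument names of Hugging Face's run_glue.py, from which run_glue_sparse.py is derived; any SparseAdapter-specific options (e.g., the pruning method or sparse ratio) are defined in the script's own argument parser and are not shown here.

python examples/pytorch/text-classification/run_glue_sparse.py \
    --model_name_or_path bert-base-uncased \
    --task_name mrpc \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 32 \
    --learning_rate 2e-5 \
    --num_train_epochs 3 \
    --output_dir ./outputs/sparseadapter-mrpc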

Citation

@inproceedings{he2022sparseadapter,
    title = "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters",
    author = {He, Shwai and Ding, Liang and Dong, Daize and Zhang, Miao and Tao, Dacheng},
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    year = "2022",
    url = "https://aclanthology.org/2022.findings-emnlp.160",
}
