Model-tuning Via Prompts Makes NLP Models Adversarially Robust

This is the official PyTorch implementation of the MVP paper. The codebase uses the TextAttack library.

Model-tuning Via Prompts Makes NLP Models Adversarially Robust

Mrigank Raman*, Pratyush Maini*, Zico Kolter, Zachary Lipton, Danish Pruthi

Setup

This repository requires Python 3.8+ and PyTorch 1.11+, but we recommend using Python 3.10 and installing the following libraries:

conda create -n MVP python=3.10
conda activate MVP
pip install torch==1.12.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu116.html
pip install textattack[tensorflow]
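
As a quick sanity check (assuming the MVP environment created above is active), the following one-liner should import both libraries and report whether the CUDA build of PyTorch can see a GPU:

python -c "import torch, textattack; print(torch.__version__, torch.cuda.is_available())"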

Training and Testing

In the commands below, you can replace the mvp argument (the one following roberta-base) with any one of projectcls, lpft, lpft_dense, clsprompt, or mlp_ft to run the corresponding model; a sketch for sweeping over all variants appears after the testing command.

Training without Adversarial Augmentation:

CUDA_VISIBLE_DEVICES=2 bash scripts/train_1_seed.sh 8 boolq roberta-base mvp 20 1e-5 max mean max mean configs/templates_boolq.yaml configs/verbalizer_boolq.yaml mvp_seed_0 textfooler train -1 1 0.1

Training with Adversarial Augmentation:

CUDA_VISIBLE_DEVICES=2,3 bash scripts/train_adv_1_seed.sh 8 boolq roberta-base mvp 20 1e-5 max mean max mean configs/templates_boolq.yaml configs/verbalizer_boolq.yaml mvp_adv textfooler train -1 1 0.1 1 l2 1

Testing:

CUDA_VISIBLE_DEVICES=2 bash scripts/test_1_seed.sh 8 boolq roberta-base mvp 20 1e-5 max mean max mean configs/templates_boolq.yaml configs/verbalizer_boolq.yaml mvp_seed_0 textfooler train -1 1 0.1
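
As noted above, the fourth positional argument selects the model variant. A minimal sketch of sweeping training and testing over every variant (assuming the same hyperparameters apply to each, and that the mvp_seed_0 argument is a run name shared by the train and test invocations):

for MODEL in mvp projectcls lpft lpft_dense clsprompt mlp_ft; do
    # train this variant with the hyperparameters from the example above
    CUDA_VISIBLE_DEVICES=2 bash scripts/train_1_seed.sh 8 boolq roberta-base $MODEL 20 1e-5 max mean max mean configs/templates_boolq.yaml configs/verbalizer_boolq.yaml ${MODEL}_seed_0 textfooler train -1 1 0.1
    # test the same run
    CUDA_VISIBLE_DEVICES=2 bash scripts/test_1_seed.sh 8 boolq roberta-base $MODEL 20 1e-5 max mean max mean configs/templates_boolq.yaml configs/verbalizer_boolq.yaml ${MODEL}_seed_0 textfooler train -1 1 0.1
done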

Model Checkpoints

Model Name    BoolQ
MLP-FT        mlp-ft.zip
MVP           mvp.zip
MVP+Adv       mvp-adv.zip
ProjectCLS    projectcls.zip
CLSPrompt     clsprompt.zip
LPFT          lpft.zip

To download the checkpoints, run the following command:

bash downloader.sh $FILENAME $FILEID

$FILEID can be found in the corresponding links above, and $FILENAME is the name under which you want to save the file.
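
For example, to fetch and unpack the MVP checkpoint (replace the <FILEID> placeholder with the ID from the mvp.zip link above; unzip is assumed to be available for extracting the archive):

bash downloader.sh mvp.zip <FILEID>
unzip mvp.zip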

Citation

Please consider citing our paper if you use MVP in your work:

@article{raman2023mvp,
  title={Model-tuning Via Prompts Makes NLP Models Adversarially Robust},
  author={Mrigank Raman and Pratyush Maini and J. Zico Kolter and Zachary Chase Lipton and Danish Pruthi},
  journal={arXiv preprint arXiv:2303.07320},
  year={2023}
}
