
Evaluating the Robustness of Discrete Prompts

Yoichi Ishibashi, Danushka Bollegala, Katsuhito Sudoh, Satoshi Nakamura: Evaluating the Robustness of Discrete Prompts (EACL 2023)

Setup

Install the required packages.

pip install -r requirements.txt

Usage

Our experiments are divided into two phases: (1) prompt learning and (2) analyzing the robustness of the learned prompts.

  1. Learn prompt tokens with AutoPrompt (AP).
cd ap
sh ap_label-token-search.sh
sh ap_trigger-token-search.sh
  2. Fine-tune the PLM with Manually-written Prompts (MP).
cd mp
sh mp_finetuning.sh
  3. Evaluate the robustness of the LM prompts. The following scripts perform the four robustness evaluations:

AutoPrompt (AP)

cd ap
sh ap_run-all-robust-eval.sh 

Manually-written Prompts (MP)

cd mp
sh mp_run-all-robust-eval.sh 

The adversarial NLI dataset

We created adversarial NLI datasets (see Section 3.5, Adversarial Perturbations, in our paper). These datasets were used for the prompt robustness evaluations described above.

data/superglue/cb/perturbation-label-change.tsv
data/superglue/cb/perturbation-label-no-change.tsv
data/superglue/mnli/perturbation-label-change.tsv
data/superglue/mnli/perturbation-label-no-change.tsv
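The perturbation files above are tab-separated. A minimal Python sketch for loading such a file with the standard csv module follows; note that the premise/hypothesis/label column names and the sample rows here are illustrative assumptions, so check the actual header row of the files before relying on them.

```python
import csv
import io

# Hypothetical sample mimicking a perturbation TSV. The real column layout
# in data/superglue/*/perturbation-*.tsv may differ -- inspect the files first.
sample_tsv = (
    "premise\thypothesis\tlabel\n"
    "It is raining.\tIt is wet outside.\tentailment\n"
    "It is raining.\tIt is dry outside.\tcontradiction\n"
)

def load_tsv(text):
    """Parse TSV text into a list of row dicts keyed by the header row."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return list(reader)

rows = load_tsv(sample_tsv)
print(len(rows))         # 2
print(rows[0]["label"])  # entailment
```

To read an actual file, replace the `io.StringIO(text)` wrapper with `open(path, newline="")` and pass the same `delimiter="\t"`.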

External Libraries

Citation

@inproceedings{Ishibashi:EACL:2023,
  author = {Yoichi Ishibashi and Danushka Bollegala and Katsuhito Sudoh and Satoshi Nakamura},
  title = {Evaluating the Robustness of Discrete Prompts},
  booktitle = {Proc. of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023)},
  year = {2023}
}
