TextFooler

A Model for Natural Language Attack on Text Classification and Inference

This is the source code for the paper: Jin, Di, et al. "Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment." arXiv preprint arXiv:1907.11932 (2019).

Prerequisites:

  • PyTorch >= 0.4
  • TensorFlow >= 1.0
  • NumPy
  • Python >= 3.6

How to use

  • Run the following code to install the esim package:

cd ESIM
python setup.py install
cd ..

  • (Optional) Pre-compute the cosine similarity scores between word pairs based on the counter-fitting word embeddings (a sketch of this computation follows the steps below):

python comp_cos_sim_mat.py [PATH_TO_COUNTER_FITTING_WORD_EMBEDDINGS]
  • Run the following code to generate the adversaries for text classification:

python attack_classification.py

  • For natural language inference:

python attack_nli.py
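
For reference, here is a minimal sketch of the kind of pre-computation comp_cos_sim_mat.py performs. The embedding file format (one word per line, followed by its space-separated vector components) and the output file name cos_sim_counter_fitting.npy are assumptions for illustration only; consult the script itself for its actual interface:

import sys
import numpy as np

# Load the counter-fitting word embeddings; each line is assumed to be
# a word followed by its space-separated vector components.
vectors = []
with open(sys.argv[1]) as f:
    for line in f:
        parts = line.rstrip().split(' ')
        vectors.append([float(x) for x in parts[1:]])
embeddings = np.array(vectors, dtype=np.float32)

# Normalize every vector to unit length so that the dot product of two
# rows equals their cosine similarity.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# All-pairs cosine similarity matrix (vocab_size x vocab_size).
# Note: this is memory-hungry for large vocabularies.
cos_sim = np.dot(embeddings, embeddings.T)

np.save('cos_sim_counter_fitting.npy', cos_sim)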

Example invocations for these two scripts are provided in run_attack_classification.py and run_attack_nli.py (a sketch of such a wrapper follows the list below). Here we explain each required argument in detail:

  • --dataset_path: The path to the dataset. The folder data contains the 1000 examples for each dataset used in the paper.
  • --target_model: The name of the target model, e.g. "bert".
  • --target_model_path: The path to the trained parameters of the target model. For ease of replication, we share the trained BERT model parameters for each dataset we used.
  • --counter_fitting_embeddings_path: The path to the counter-fitting word embeddings.
  • --counter_fitting_cos_sim_path: Optional. If provided, the pre-computed cosine similarity scores based on the counter-fitting word embeddings are loaded to save time; otherwise, they are computed on the fly.
  • --USE_cache_path: The path where the USE model file is saved (it is downloaded automatically if this path is empty).
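
As an illustration, a wrapper along the lines of run_attack_classification.py might assemble and launch the attack command as shown below. Every path here is a hypothetical placeholder, not the repository's actual layout:

import os

# Hypothetical locations -- substitute the paths on your machine.
command = ('python attack_classification.py '
           '--dataset_path data/yelp '
           '--target_model bert '
           '--target_model_path /path/to/trained/bert '
           '--counter_fitting_embeddings_path /path/to/counter-fitted-vectors.txt '
           '--counter_fitting_cos_sim_path /path/to/cos_sim_counter_fitting.npy '
           '--USE_cache_path /path/to/USE_cache')

os.system(command)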