Towards Robustifying NLI Models Against Lexical Dataset Biases

This is the official repository for the following paper:

  • Towards Robustifying NLI Models Against Lexical Dataset Biases, Xiang Zhou and Mohit Bansal, ACL 2020 (arxiv)

Dependencies

This code requires Python 3.4 and TensorFlow 1.12.0.

Datasets

All the datasets (train/eval) can be downloaded here. For a detailed description of the datasets, please check the README in the downloaded file.

Prepare

  1. Download the datasets and put them under the data folder.
  2. Download the GloVe embeddings and put them under the data folder (a minimal loading sketch follows this list).
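
As a quick sanity check that the embeddings are in place, standard plain-text GloVe files can be loaded as in the sketch below. This is a minimal sketch, not code from this repository; the file name and dimensionality are assumptions, so adjust them to whichever GloVe file you downloaded.

import numpy as np

def load_glove(path="data/glove.840B.300d.txt", dim=300):
    # path and dim are example values; point them at the GloVe file you downloaded
    embeddings = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word = " ".join(parts[:-dim])  # a few GloVe tokens contain spaces
            embeddings[word] = np.asarray(parts[-dim:], dtype=np.float32)
    return embeddings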

Usage

Example scripts for BoW Sub-Model Orthogonality with HEX

  1. First train the baseline BiLSTM model by running
bash scripts/baseline.sh
  2. Train the debiased model by running
bash scripts/hex.sh

The HEX implementation is adapted from https://github.com/HaohanWang/HEX.
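
For intuition, HEX removes from the joint (main + BoW) branch whatever the BoW-only branch can already explain: the joint logits are projected onto the subspace orthogonal to the BoW logits, so the final classifier cannot rely on the lexical features captured by the bag-of-words sub-model. The sketch below is a minimal NumPy illustration of that projection step, not the exact code in scripts/hex.sh; the function name, shapes, and the small ridge term are assumptions.

import numpy as np

def hex_project(f_joint, f_bias, eps=1e-6):
    # f_joint: [batch, num_classes] logits from the combined (main + BoW) branch (F_L in HEX)
    # f_bias:  [batch, num_classes] logits from the BoW-only branch (F_G in HEX)
    # Returns F_P = (I - F_G (F_G^T F_G)^{-1} F_G^T) F_L, i.e. the part of the
    # joint prediction that the bag-of-words sub-model cannot explain.
    gram = f_bias.T @ f_bias + eps * np.eye(f_bias.shape[1])  # small ridge for invertibility
    explained = f_bias @ np.linalg.solve(gram, f_bias.T @ f_joint)
    return f_joint - explained

# Toy usage: random logits for a batch of 4 examples and 3 NLI labels
f_debiased = hex_project(np.random.randn(4, 3), np.random.randn(4, 3))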

Evaluation

The evaluation script is evaluation.py. When running evaluation, first change TESTING_DATASETS in that file, then run python evaluation.py scripts/TRAININGSCRIPT. This will automatically generate and run the testing scripts corresponding to your training script.
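
For example, assuming TESTING_DATASETS is a Python list of dataset names (the names below are placeholders; the real names are listed in the README that ships with the downloaded data), the edit inside evaluation.py could look like

TESTING_DATASETS = ["dataset_a_test", "dataset_b_test"]  # placeholder names

followed by, for the HEX training script from the Usage section above,

python evaluation.py scripts/hex.sh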

More code, model checkpoints, and documentation will come soon.
