This is a repository for an NLU course project by Fangjun Zhang, Nimi Wang and Ruoyu Zhu. We aim to incorporate dependency information into natural language inference and fact verification tasks. Our model achieves results comparable to state-of-the-art methods. We use the FEVER dataset as our training data and the FEVER score as our evaluation metric. You can find our report here
We implement our model on top of several packages, including Jack the Reader, the FEVER Baseline, and the UCL MR data-downloading script.
Please also install StanfordNLP, which is used for the dependency-parsing preprocessing step.
Running the following command starts the whole training and validation pipeline.
python pipeline.py --config configs/pytorch_depsa_esim_n5.json --overwrite --model [model_name]
Furthermore, if you are using the Slurm system on an HPC cluster, you can submit the sbatch_*.sh
scripts with
sbatch sbatch_*.sh
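For reference, a Slurm job script for this pipeline typically looks like the sketch below. The resource values (`--gres`, `--mem`, `--time`) and the environment name are illustrative placeholders, not the settings used in our sbatch_*.sh scripts; adjust them to your cluster.

```shell
#!/bin/bash
#SBATCH --job-name=depsa_esim      # job name shown in the queue
#SBATCH --gres=gpu:1               # request one GPU (placeholder)
#SBATCH --mem=32GB                 # memory request (placeholder)
#SBATCH --time=48:00:00            # wall-clock limit (placeholder)

# Activate your environment ("nlu" is a placeholder name), then launch the pipeline.
source activate nlu
python pipeline.py --config configs/pytorch_depsa_esim_n5.json --overwrite --model [model_name]
```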
We also provide a simple script for visualizing the self-attention weights, the dependency mask, and the dependency-enhanced self-attention weights.
python jack/visualize.py
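To illustrate what the three visualized quantities are, here is a minimal NumPy sketch of dependency-enhanced self-attention: plain attention weights, a binary mask derived from dependency arcs, and attention recomputed with non-arc positions masked out. The scores and the mask are toy values, and this is not the project's actual implementation, which visualizes these matrices as heatmaps.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy 4-token sentence: raw self-attention scores (illustrative values).
rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 4))

# Binary dependency mask: 1 where a dependency arc (or self-loop) exists.
dep_mask = np.array([[1, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 1, 1, 1],
                     [0, 0, 1, 1]], dtype=float)

# Plain self-attention weights vs. dependency-enhanced weights:
# masked positions get a large negative score, so their weight vanishes.
attn = softmax(scores)
dep_attn = softmax(np.where(dep_mask > 0, scores, -1e9))
```

Each row of both `attn` and `dep_attn` sums to 1, but in `dep_attn` all probability mass is confined to positions connected by a dependency arc.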