
Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering

The source code for our paper Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering, published at EMNLP 2020. This repo contains code modified from CSS-VQA; many thanks for their efforts.

Prerequisites

Make sure you are on a machine with an NVIDIA GPU, Python 2.7, and about 100 GB of free disk space.
h5py==2.10.0
pytorch==1.1.0
Click==7.0
numpy==1.16.5
tqdm==4.35.0
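
The following is a minimal install sketch for these pinned dependencies using pip, assuming a working Python 2.7 environment (note that the pip package for PyTorch is named torch; depending on your CUDA version, you may need to install PyTorch 1.1.0 following the official PyTorch instructions instead):

pip install h5py==2.10.0 torch==1.1.0 Click==7.0 numpy==1.16.5 tqdm==4.35.0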

Data Setup

For all data preprocessing and setup, please refer to bottom-up-attention-vqa.

  1. Please run the script to download the data.
bash tools/download.sh
  2. Please click the link HERE to download the rest of the data, which is kindly shared by CSS-VQA.

Training

All the arguments for running our code are preset in main.py.

Run

CUDA_VISIBLE_DEVICES=0 python main.py

to train a model.

Testing

Run

CUDA_VISIBLE_DEVICES=0 python eval.py --dataset [] --debias [] --model_state []

to evaluate a model.
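
For illustration only, a hypothetical invocation with the placeholders filled in might look like the following; the example values for --dataset, --debias, and the checkpoint path are assumptions (not confirmed by this repo) and should be replaced with your own choices:

CUDA_VISIBLE_DEVICES=0 python eval.py --dataset cpv2 --debias learned_mixin --model_state saved_models/model.pth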

Citation

If you find this paper helpful for your research, please consider citing it in your publications.

@inproceedings{liang2020learning,
  title={Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering},
  author={Liang, Zujie and Jiang, Weitao and Hu, Haifeng and Zhu, Jiaying},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2020}
}
