
# VQA-VS and NLVR2

## Dataset Preparation

The VQA-VS data can be downloaded from Google Drive. To create the arrow files, we follow ViLT and METER; see this link for details.

## Fine-tuning on Downstream Tasks

- Download the FIBER pre-trained model from here.

### VQA-VS

```bash
python run.py with \
  seed=3 \
  task_inter_and_intra_multimodal_vqa \
  data_root=./datasets/vqa/arrow_files/ \
  log_dir=./inter_and_intra_modality/ \
  num_gpus=4 \
  per_gpu_batchsize=16 \
  max_epoch=100 \
  learning_rate=1e-4 \
  load_path=fiber_pretrain.ckpt
```
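The `with key=value` arguments follow Sacred-style config overrides: each pair replaces an entry of the experiment config, while bare tokens such as `task_inter_and_intra_multimodal_vqa` select a named config. The toy parser below sketches just the key=value part (Sacred's real parsing is richer; this is not FIBER's code):

```python
# Hedged sketch of Sacred-style "key=value" overrides: split each pair
# and cast the value to int or float when possible, else keep the string.
def parse_overrides(args):
    cfg = {}
    for arg in args:
        if "=" not in arg:
            continue  # bare tokens (named configs) are handled separately
        key, value = arg.split("=", 1)
        for cast in (int, float):
            try:
                value = cast(value)
                break
            except ValueError:
                pass
        cfg[key] = value
    return cfg

print(parse_overrides(["seed=3", "per_gpu_batchsize=16", "learning_rate=1e-4"]))
```

So `learning_rate=1e-4` arrives as the float `0.0001`, not the string `"1e-4"`.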

### NLVR2

```bash
python run.py with \
  seed=3 \
  task_inter_and_intra_multimodal_nlvr2 \
  data_root=./datasets/vqa/arrow_files/ \
  log_dir=./inter_and_intra_modality/ \
  num_gpus=4 \
  per_gpu_batchsize=16 \
  max_epoch=100 \
  learning_rate=1e-4 \
  load_path=fiber_pretrain.ckpt
```

## Acknowledgements

The code is based on FIBER. We thank the authors for their amazing work and for releasing their codebase.