This is the PyTorch implementation of our paper accepted to ECCV'22: Video Graph Transformer for Video Question Answering.
- Release features of TGIF-QA + MSRVTT-QA [temporary access].
Assuming you have installed Anaconda, please do the following to set up the environment:
>conda create -n videoqa python==3.8.8
>conda activate videoqa
>git clone https://github.com/sail-sg/VGT.git
>cd VGT
>pip install -r requirements.txt
Please create a data folder outside this repo, so that you have two folders in your workspace: 'workspace/data/' and 'workspace/VGT/'.
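For example, assuming your workspace root is named 'workspace/' (a hypothetical name; adjust the path to your own setup), the data folder can be created with:
mkdir -p workspace/data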
Below we use NExT-QA as an example to get you familiar with the code.
Please download the related video features and QA annotations according to the links provided in the Results and Resources section. Extract the QA annotations into workspace/data/datasets/nextqa/, the video features into workspace/data/feats/nextqa/, and the checkpoint files into workspace/data/save_models/nextqa/.
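For reference, the target folders can be created beforehand as sketched below (assuming the 'workspace/' layout above; extract each downloaded archive into its matching folder):
mkdir -p workspace/data/datasets/nextqa
mkdir -p workspace/data/feats/nextqa
mkdir -p workspace/data/save_models/nextqa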
To test with the provided checkpoint and evaluate the results:
./shell/next_test.sh 0
python eval_next.py --folder VGT --mode val
Table 1. VideoQA Accuracy (%).
| Cross-Modal Pretrain | NExT-QA | TGIF-QA (Action) | TGIF-QA (Trans) | TGIF-QA (FrameQA) | TGIF-QA-R* (Action) | TGIF-QA-R* (Trans) | MSRVTT-QA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| - | 53.7 | 95.0 | 97.6 | 61.6 | 59.9 | 70.5 | 39.7 |
| WebVid0.18M | 55.7 | - | - | - | 60.5 | 71.5 | - |
| - | feats | feats | feats | feats | feats | feats | feats |
| - | train&val+test | videos | videos | videos | videos | videos | videos |
| - | Q&A | Q&A | Q&A | Q&A | Q&A | Q&A | Q&A |
We have provided all the training scripts in the folder 'shell'. You can start training by specifying the GPU ID(s) after the script. (If you have multiple GPUs, separate them with commas: ./shell/nextqa_train.sh 0,1)
./shell/nextqa_train.sh 0
It will train the model and save it to the folder 'save_models/nextqa/'.
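Once training finishes, you can test and evaluate the trained model with the same inference and evaluation commands shown above (if your run is saved under a different folder name, adjust the --folder argument accordingly):
./shell/next_test.sh 0
python eval_next.py --folder VGT --mode val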
@article{xiao2022video,
title={Video Graph Transformer for Video Question Answering},
author={Xiao, Junbin and Zhou, Pan and Chua, Tat-Seng and Yan, Shuicheng},
journal={arXiv preprint arXiv:2207.05342},
year={2022}
}
Some code is taken from VQA-T. We thank the authors for their great work and code.
If you use any resources (features, code, or models) from this repo, please kindly cite our paper and acknowledge the source.
This repository is released under the Apache 2.0 license as found in the LICENSE file.