forked from sail-sg/VGT

Video Graph Transformer for Video Question Answering (ECCV'22)



This is the PyTorch implementation of our paper accepted to ECCV'22: Video Graph Transformer for Video Question Answering.

(Figure: VGT vs. VGT without DGT)

Todo

  1. Release features of TGIF-QA + MSRVTT-QA [temporary access].

Environment

Assuming you have Anaconda installed, set up the environment as follows:

>conda create -n videoqa python=3.8.8
>conda activate videoqa
>git clone https://github.com/sail-sg/VGT.git
>cd VGT
>pip install -r requirements.txt

Preparation

Please create a data folder outside this repo, so that your workspace contains two folders: 'workspace/data/' and 'workspace/VGT/'.

Below we use NExT-QA as an example to get you familiar with the code. Please download the related video features and QA annotations via the links provided in the Results and Resources section. Extract the QA annotations into workspace/data/datasets/nextqa/, the video features into workspace/data/feats/nextqa/, and the checkpoint files into workspace/data/save_models/nextqa/.
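The expected layout can be created up front. A minimal sketch (folder and dataset names taken from this README; adjust if you work on a different dataset):

```python
import os

# Create the workspace layout described above:
# workspace/data/{datasets,feats,save_models}/nextqa/
for sub in ("datasets", "feats", "save_models"):
    os.makedirs(os.path.join("workspace", "data", sub, "nextqa"), exist_ok=True)
```

Run this from the workspace root (the parent of both 'data/' and 'VGT/'), then extract the downloaded files into the matching folders.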

Inference

./shell/next_test.sh 0

Evaluation

python eval_next.py --folder VGT --mode val
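For orientation, the accuracy numbers reported below reduce to exact matching between predicted and ground-truth answer indices. A minimal sketch of that metric, not the actual eval_next.py logic (the preds dict and its question IDs are hypothetical):

```python
# Hypothetical predictions: qid -> (predicted answer index, ground-truth index)
preds = {"q1": (2, 2), "q2": (0, 1), "q3": (3, 3)}

# Exact-match accuracy over all questions (2 of 3 correct here)
acc = sum(int(p == gt) for p, gt in preds.values()) / len(preds)
print(f"Accuracy: {100 * acc:.1f}%")
```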

Results and Resources

Table 1. VideoQA Accuracy (%).

| Cross-Modal Pretrain | NExT-QA | TGIF-QA (Action) | TGIF-QA (Trans) | TGIF-QA (FrameQA) | TGIF-QA-R* (Action) | TGIF-QA-R* (Trans) | MSRVTT-QA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| - | 53.7 | 95.0 | 97.6 | 61.6 | 59.9 | 70.5 | 39.7 |
| WebVid0.18M | 55.7 | - | - | - | 60.5 | 71.5 | - |
| - | feats | feats | feats | feats | feats | feats | feats |
| - | train&val+test | videos | videos | videos | videos | videos | videos |
| - | Q&A | Q&A | Q&A | Q&A | Q&A | Q&A | Q&A |
(We have merged some files of the same dataset to avoid too many links.)

Train

We have provided all the scripts in the folder 'shells'; start training by specifying the GPU ID(s) after the script name. (If you have multiple GPUs, separate them with commas: ./shell/nextqa_train.sh 0,1.)

./shell/nextqa_train.sh 0

It will train the model and save checkpoints to the folder 'save_models/nextqa/'.

Result Visualization (NExT-QA)

(Figure: VGT vs. VGT without DGT)

Citation

@article{xiao2022video,
  title={Video Graph Transformer for Video Question Answering},
  author={Xiao, Junbin and Zhou, Pan and Chua, Tat-Seng and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2207.05342},
  year={2022}
}

Acknowledgements

Some code is taken from VQA-T. We thank the authors for their great work and code.

Notes

If you use any resources (feature & code & models) from this repo, please kindly cite our paper and acknowledge the source.

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.
