Controllable Video Captioning with POS Sequence Guidance Based on Gated Fusion Network (ICCV 2019)

Introduction

In this paper, we propose to guide video caption generation with POS information, based on a gated fusion of multiple representations of the input videos. We construct a novel gated fusion network, with one cross-gating (CG) block, to effectively encode and fuse different types of representations, e.g., the motion and content features. A POS sequence generator relies on this fused representation to predict the global syntactic structure, which is thereafter leveraged to guide the caption generation and control the syntax of the generated sentence. This code is a PyTorch implementation of this work. (See the framework figure in the paper for an overview of the architecture.)
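
To make the cross-gating idea concrete, below is a minimal PyTorch sketch of a CG-style fusion block. It is an illustration only, not the code in this repository: the module name, the gating layout, and the feature dimensions (1536-d Inception-ResNet-v2 content features, 1024-d I3D motion features) are assumptions.

# A minimal sketch of the cross-gating (CG) fusion idea, for illustration
# only -- this is NOT the repository's implementation. The module name, the
# gating layout, and the feature dimensions are assumptions.
import torch
import torch.nn as nn

class CrossGatingBlock(nn.Module):
    """Fuse two video representations by letting each stream gate the other."""
    def __init__(self, content_dim, motion_dim, fused_dim):
        super(CrossGatingBlock, self).__init__()
        self.gate_for_motion = nn.Linear(content_dim, motion_dim)   # content -> gate over motion
        self.gate_for_content = nn.Linear(motion_dim, content_dim)  # motion -> gate over content
        self.fuse = nn.Linear(content_dim + motion_dim, fused_dim)

    def forward(self, content, motion):
        # Each stream is modulated by a sigmoid gate computed from the other.
        gated_motion = motion * torch.sigmoid(self.gate_for_motion(content))
        gated_content = content * torch.sigmoid(self.gate_for_content(motion))
        # Concatenate the cross-gated streams and project to a fused feature.
        return self.fuse(torch.cat([gated_content, gated_motion], dim=1))

# Example: fuse a batch of 8 content/motion feature pairs into 512-d features.
block = CrossGatingBlock(content_dim=1536, motion_dim=1024, fused_dim=512)
fused = block(torch.randn(8, 1536), torch.randn(8, 1024))  # shape: (8, 512)

The key property is that each stream modulates the other through a sigmoid gate before fusion, so the fused representation emphasizes the parts of one modality that are consistent with the other.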

Dependencies

  • Python 2.7
  • PyTorch 0.3.1.post3
  • CUDA 8.0
  • cuDNN 7.0.5

Prepare

  1. Download the Inception_ResNet_V2 features of the MSRVTT-10K RGB frames and the I3D features of the MSRVTT-10K optical flows, and put them in the datas/ folder (a quick sanity-check sketch follows this list).
  2. Download the pre-trained models, and put them in the results/ folder.
  3. Download the automatic evaluation metrics -- coco-caption -- and link it into caption_src/ as well as pos_src/ (the relative target ../coco-caption resolves from inside each subdirectory):
ln -s ../coco-caption caption_src/coco-caption
ln -s ../coco-caption pos_src/coco-caption
  4. Finally, the root directory should contain datas/, results/, coco-caption/, caption_src/, and pos_src/.
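
Once the files are in place, a quick inspection of the downloaded feature files can save debugging later. Below is a minimal sketch using h5py; the file name is hypothetical, so substitute the actual names of the HDF5 files placed in datas/.

# A minimal sanity check for the downloaded feature files (a sketch, not part
# of this repository; requires h5py). The file name below is hypothetical --
# substitute the actual names of the HDF5 files placed in datas/.
import h5py

with h5py.File('datas/msrvtt10k_inception_resnet_v2.hdf5', 'r') as f:  # hypothetical name
    for key in list(f.keys())[:3]:
        print(key, f[key].shape)  # expect one feature array per video clip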

Evaluation

We provide the pre-trained models "Ours(IR+M)" and "Ours_RL(IR+M)" from the paper so that the reported results can be reproduced. Users can change the command in evaluation.sh to reproduce either "Ours(IR+M)" or "Ours_RL(IR+M)".

Metrics   Ours(IR+M)   Ours_RL(IR+M)
BLEU@1    0.7875       0.8175
BLEU@2    0.6601       0.6788
BLEU@3    0.5339       0.5376
BLEU@4    0.4194       0.4128
METEOR    0.2819       0.2869
ROUGE-L   0.6161       0.6210
CIDEr     0.4866       0.5337
cd caption_src/
sh evaluation.sh

Training

Training in this repository is divided into two steps:

  1. Train a global POS generator and extract the global POS-tag features.
cd pos_src/
sh run_train.sh

After early stopping, extract and store the POS-tag features in pos_src/globalpos_features/xxx.hdf5, where the file name xxx.hdf5 can be customized at line 36 of pos_src/eval_utils.py:

sh run_extract_pos.sh

Remember to copy the POS-tag feature HDF5 file into datas/.

  2. Train the caption model.
cd caption_src/
sh run_train.sh
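
For intuition about how the global POS-tag features extracted in step 1 guide the caption model in step 2, here is a rough sketch of a decoding step in which the global POS feature is injected alongside the fused video feature at every time step. This mirrors the idea described in the paper; it is not the repository's decoder, and all names and dimensions are assumptions.

# A rough, hypothetical sketch of a POS-guided decoding step (not the
# repository's decoder). The global POS-tag feature extracted in step 1 is
# injected alongside the fused video feature at every time step, so the
# predicted global syntactic structure can steer word-by-word generation.
# All module names and dimensions below are assumptions.
import torch
import torch.nn as nn

class POSGuidedDecoderStep(nn.Module):
    def __init__(self, embed_dim, fused_dim, pos_dim, hidden_dim, vocab_size):
        super(POSGuidedDecoderStep, self).__init__()
        self.lstm = nn.LSTMCell(embed_dim + fused_dim + pos_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_embed, fused_video, global_pos, state):
        # Concatenate the previous word embedding, the fused video feature,
        # and the global POS-tag feature as the LSTM input for this step.
        h, c = self.lstm(torch.cat([word_embed, fused_video, global_pos], dim=1), state)
        return self.out(h), (h, c)  # vocabulary logits and the new state

# Example step: batch of 8, 300-d word embeddings, 512-d fused video feature,
# 256-d global POS feature, 512-d hidden state, 10000-word vocabulary.
step = POSGuidedDecoderStep(300, 512, 256, 512, 10000)
state = (torch.zeros(8, 512), torch.zeros(8, 512))
logits, state = step(torch.randn(8, 300), torch.randn(8, 512), torch.randn(8, 256), state)

Because the global syntactic structure is visible at every decoding step, it can control the syntax of the generated sentence, which is the "controllable" aspect of the method.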

Citation

If you use our code in your research or wish to refer to the baseline results, please use the following BibTeX entry.

@article{wang2019controllable,
  title={Controllable Video Captioning with POS Sequence Guidance Based on Gated Fusion Network},
  author={Wang, Bairui and Ma, Lin and Zhang, Wei and Jiang, Wenhao and Wang, Jingwen and Liu, Wei},
  journal={arXiv preprint arXiv:1908.10072},
  year={2019}
}

Acknowledgements

Special thanks to Ruotian Luo; our code for Self-Critical Sequence Training was inspired by and references his repository.
