automatic video description generation with GPU training
This package contains the accompanying code for the following paper:

PDF · BibTeX · Video · Poster

with follow-up works that include

With the default setup in config.py, you will be able to train a model on YouTube2Text, reproducing (in fact slightly exceeding) the results in the 3rd row of Table 1, where a global temporal attention model is applied to features extracted by GoogLeNet.

Note: video captioning research has gradually converged on coco-caption as the standard toolbox for evaluation, so we integrate it into this package. In the paper, however, a different tokenization method was used, so the results from this package are not strictly comparable with those reported in the paper.

##### Please follow the instructions below to run this package

  1. Dependencies:
     - Theano can be easily installed by following the instructions there. Theano has its own dependencies as well. The simplest way to satisfy them is to install Anaconda. Instead of using the Theano that ships with Anaconda, we suggest running `git clone git://github.com/Theano/Theano.git` to get the most recent version of Theano.
     - coco-caption. Install it by simply adding it to your `$PYTHONPATH`.
     - Jobman. After it has been git cloned, add it to your `$PYTHONPATH` as well.
  2. Download the preprocessed version of YouTube2Text. It is a zip file that contains everything needed to train the model. Unzip it somewhere; by default, unzipping creates a folder `youtube2text_iccv15` that contains 8 pkl files.
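Once the dependencies are installed, a quick sanity check can confirm they are visible on your `$PYTHONPATH`. This is only a sketch: the module names below (`theano`, `pycocoevalcap`, `jobman`) are assumptions about how the packages expose themselves; adjust them to your setup.

```python
# Sanity-check sketch: verify that the dependencies are importable.
# Module names are assumptions -- adjust to your installation.
import importlib.util

def check_deps(names=("theano", "pycocoevalcap", "jobman")):
    """Return a dict mapping each dependency name to its availability."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    for name, ok in check_deps().items():
        print("{}: {}".format(name, "found" if ok else "MISSING -- check $PYTHONPATH"))
```

If any entry prints `MISSING`, revisit the corresponding installation step before moving on.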

preprocessed YouTube2Text download link

  1. Go to common.py and change the following two lines, `RAB_DATASET_BASE_PATH = '/data/lisatmp3/yaoli/datasets/'` and `RAB_EXP_PATH = '/data/lisatmp3/yaoli/exp/'`, according to your specific setup. The first path is the parent directory containing the `youtube2text_iccv15` dataset folder. The second path specifies where you would like to save all the experimental results.
  2. Before training the model, we suggest testing data_engine.py by running `python data_engine.py` and checking that it finishes without any error.
  3. It is also useful to verify that the coco-caption evaluation pipeline works properly by running `python metrics.py` without any error.
  4. Now you are ready to launch the training:
     - to run on CPU: `THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python train_model.py`
     - to run on GPU: `THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python train_model.py`
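For step 1 above, the edited lines at the top of common.py might look like the following. The paths here are examples only, not defaults you must use; substitute your own directories.

```python
# In common.py -- example paths, substitute your own setup.
# RAB_DATASET_BASE_PATH is the parent dir that contains youtube2text_iccv15/;
# RAB_EXP_PATH is where all experiment outputs are written.
RAB_DATASET_BASE_PATH = '/home/you/datasets/'
RAB_EXP_PATH = '/home/you/exp/'
```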

##### Notes on running experiments

Running train_model.py for the first time takes much longer, since Theano needs to compile many things on the first run and cache them on disk for future runs. You will probably see some warning messages on stdout; it is safe to ignore all of them. Both model parameters and configurations are saved (the saving path is printed on stdout, so it is easy to find). The most important thing to monitor is train_valid_test.txt in the experiment output folder. It is a big table recording all metrics at each validation. Please refer to model_attention.py lines 1207--1215 for the actual meaning of the columns.
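As a sketch of how a table like train_valid_test.txt could be monitored: the snippet below finds the validation row with the best value in a given column. The layout and column positions here are hypothetical; the real column meanings are defined in model_attention.py lines 1207--1215.

```python
# Sketch: scan a whitespace-separated metrics table (one row per validation)
# and return the row that maximizes a chosen column.
# Column indices are hypothetical -- see model_attention.py for the real layout.

def best_row(lines, col_index):
    """Return the row (as a list of strings) with the highest value in col_index."""
    rows = [line.split() for line in lines if line.strip()]
    return max(rows, key=lambda r: float(r[col_index]))

if __name__ == "__main__":
    # toy table: epoch, metric_a, metric_b (made-up numbers)
    table = ["1 0.31 0.42", "2 0.35 0.45", "3 0.33 0.44"]
    print(best_row(table, 1))  # ['2', '0.35', '0.45']
```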

##### Bonus

In the paper, we never mentioned the use of uni-directional/bi-directional LSTMs to encode video representations, but this is an obvious extension. In fact, several recent papers following ours have done related work. So we provide code for more sophisticated encoders as well.

##### Troubleshooting

There is a known problem in the COCO evaluation script (their code) where METEOR is computed by spawning another subprocess, which does not get killed automatically. As METEOR is called more and more, it gradually eats up memory. To fix the problem, add the line `self.meteor_p.kill()` after https://github.com/tylin/coco-caption/blob/master/pycocoevalcap/meteor/meteor.py#L44.
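The general pattern behind that fix can be sketched as follows. This is not the coco-caption code itself: `ScorerSketch` is a hypothetical stand-in for the METEOR wrapper, with a trivial child process in place of the Java subprocess; the point is that the added `kill()` call reaps the child so repeated scorer instances do not leak.

```python
# Sketch of the fix pattern: a scorer that owns a subprocess must kill it
# explicitly on cleanup (in coco-caption's meteor.py, the analogous added
# line is self.meteor_p.kill()).
import subprocess
import sys

class ScorerSketch(object):
    def __init__(self):
        # stand-in for the long-lived METEOR subprocess
        self.proc = subprocess.Popen(
            [sys.executable, "-c", "import sys; sys.stdin.read()"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def close(self):
        self.proc.stdin.close()
        self.proc.kill()   # the added line: terminate the child process
        self.proc.wait()   # reap it so it cannot linger as a zombie

s = ScorerSketch()
s.close()
print(s.proc.returncode is not None)  # True: the child has been reaped
```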

If you have any questions, drop us an email at li.yao@umontreal.ca.