PyTorch implementation of Convolutional Image Captioning
Clone the repository with the `--recursive` flag to recursively clone third-party submodules:

```
git clone --recursive https://github.com/aditya12agd5/convcap.git
```
For setup, first install PyTorch 0.2.0_3. For this code we used CUDA 8.0, Python 2.7, and pip:

```
pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl
```

torchvision-0.1.9 was installed from source.
Install the other Python packages with:

```
pip install -r requirements.txt
```
A wordlist is provided in ./data/wordlist.p
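Since the wordlist is shipped as a Python pickle, it can presumably be loaded with the `pickle` module. Below is a self-contained sketch that round-trips a toy vocabulary to show the access pattern; the exact structure of the real `./data/wordlist.p` is an assumption:

```python
import os
import pickle
import tempfile

# Hypothetical sketch: the repository stores its vocabulary as a pickle
# at ./data/wordlist.p. Here a toy wordlist is written and read back to
# illustrate loading; the real file's contents may differ.
toy_wordlist = ['<S>', '</S>', 'a', 'man', 'riding', 'horse']

path = os.path.join(tempfile.mkdtemp(), 'wordlist_demo.p')
with open(path, 'wb') as f:
    pickle.dump(toy_wordlist, f)

with open(path, 'rb') as f:
    wordlist = pickle.load(f)

# Typical captioning setup: map each word to an integer index
word2idx = {w: i for i, w in enumerate(wordlist)}
print(len(wordlist), word2idx['man'])
```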
Fetch the train/val/test splits (same as NeuralTalk) for MSCOCO with:

```
bash scripts/fetch_splits.sh
```
Download the train2014 and val2014 images and their annotations from the MSCOCO webpage and put them in ./data/coco.
To train the model on MSCOCO from scratch:

```
python main.py model_dir
```
model_dir is the directory where the model and results are saved. Run `python main.py -h` for details about the other command-line arguments. Two models are saved: model.pth, at the end of every epoch, and bestmodel.pth, the model that obtains the best score (on the CIDEr metric by default) over all epochs.
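The checkpointing scheme described above (model.pth every epoch, bestmodel.pth for the epoch with the highest CIDEr) can be sketched as follows. `train_loop` and `save_model` are illustrative stand-ins, not the repository's actual functions:

```python
# Hedged sketch of the checkpointing logic: snapshot every epoch, and
# additionally keep a copy whenever the validation CIDEr improves.
# save_model stands in for something like torch.save(model, path).
def train_loop(epoch_scores, save_model):
    best_cider = float('-inf')
    for epoch, cider in enumerate(epoch_scores):
        save_model('model.pth')        # overwritten at end of every epoch
        if cider > best_cider:         # new best on the CIDEr metric
            best_cider = cider
            save_model('bestmodel.pth')
    return best_cider

# Toy usage: record which files would have been written
saved = []
best = train_loop([0.82, 0.90, 0.87, 0.91], saved.append)
print(best)  # 0.91
```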
To train the model without attention, use the `--no-attention` flag:

```
python main.py --no-attention model_dir
```
To test on MSCOCO with the released model:

```
python main.py -t 0 model_dir
```
model_dir should contain the released model bestmodel.pth. Run scripts/fetch_trained_model.sh to download the trained bestmodel.pth into ./data/.
To caption your own images:

```
python captionme.py model_dir image_dir
```
model_dir should contain the released model bestmodel.pth. Captions for the *.png and *.jpg images in image_dir will be saved to image_dir/captions.txt. Run `python captionme.py -h` for additional options.
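The input/output behaviour described above can be sketched as a short script: collect the *.png and *.jpg files in a directory and write one caption per line to captions.txt. Here `caption_image` is a hypothetical stand-in for the trained model's decoder, and the exact line format of the real captions.txt may differ:

```python
import glob
import os
import tempfile

# Sketch of the captioning I/O: gather *.png/*.jpg from image_dir and
# write "filename: caption" lines to image_dir/captions.txt.
def caption_directory(image_dir, caption_image):
    images = sorted(glob.glob(os.path.join(image_dir, '*.png'))
                    + glob.glob(os.path.join(image_dir, '*.jpg')))
    out_path = os.path.join(image_dir, 'captions.txt')
    with open(out_path, 'w') as f:
        for path in images:
            f.write('%s: %s\n' % (os.path.basename(path), caption_image(path)))
    return out_path

# Toy usage: empty image files and a dummy captioner
d = tempfile.mkdtemp()
for name in ('a.png', 'b.jpg', 'notes.txt'):
    open(os.path.join(d, name), 'w').close()

out = caption_directory(d, lambda p: 'a dummy caption')
print(open(out).read())
```

Note that non-image files (notes.txt above) are skipped by the glob patterns.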
If you use this code, please cite:

```
@inproceedings{AnejaConvImgCap17,
  author    = {Jyoti Aneja and Aditya Deshpande and Alexander Schwing},
  title     = {Convolutional Image Captioning},
  booktitle = {Computer Vision and Pattern Recognition},
  url       = {https://arxiv.org/abs/1711.09151},
  year      = {2018}
}
```
The scores on the MSCOCO test split (http://cs.stanford.edu/people/karpathy/deepimagesent/) for the trained model released with this code are:
Beam Size | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE | CIDEr
---|---|---|---|---|---|---|---
1 | .710 | .538 | .394 | .286 | .243 | .521 | .902
3 | .721 | .551 | .413 | .310 | .248 | .529 | .946
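The beam size in the table above refers to beam search at decoding time: keeping the k highest-scoring partial captions at each step rather than only the single best (greedy decoding). The toy sketch below illustrates why a wider beam can find a higher-scoring caption; the score table and every function in it are illustrative, not the repository's decoder:

```python
import math

# Illustrative next-word log-probabilities keyed by the caption prefix;
# a real model would compute these from the image and the partial caption.
def log_probs(prefix):
    table = {
        (): {'a': math.log(0.6), 'the': math.log(0.4)},
        ('a',): {'dog': math.log(0.5), 'cat': math.log(0.5)},
        ('the',): {'dog': math.log(0.9), 'cat': math.log(0.1)},
    }
    return table.get(prefix, {'</s>': 0.0})  # end token once the table runs out

def beam_search(beam_size, steps=3):
    beams = [((), 0.0)]  # (prefix, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for prefix, score in beams:
            for word, lp in log_probs(prefix).items():
                candidates.append((prefix + (word,), score + lp))
        # keep only the beam_size best partial captions
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0]

print(beam_search(1))  # beam size 1 = greedy decoding
print(beam_search(3))  # wider beam reconsiders the lower-scoring first word
```

Greedy decoding commits to "a" at the first step, while the beam of 3 keeps "the" alive and ends up with the higher-probability caption "the dog", mirroring the small score gains from beam size 3 in the table.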
The scores on the MSCOCO test set (40,775 images) for the captioning challenge (http://cocodataset.org/#captions-eval) for the trained model released with this code are:
| | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE | CIDEr |
|---|---|---|---|---|---|---|---|
| c5 | .716 | .545 | .406 | .303 | .245 | .525 | .906 |
| c40 | .895 | .803 | .691 | .579 | .331 | .673 | .914 |