ConvCap: Convolutional Image Captioning

PyTorch implementation of Convolutional Image Captioning (https://arxiv.org/abs/1711.09151).

Clone the repository with the --recursive flag to recursively clone the third-party submodules. For example,

git clone --recursive https://github.com/aditya12agd5/convcap.git

For setup, first install PyTorch 0.2.0_3. For this code we used CUDA 8.0, Python 2.7, and pip:

pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl 

torchvision-0.1.9 was installed from source

Install the other Python packages using

pip install -r requirements.txt

A wordlist is provided in ./data/wordlist.p
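
The wordlist is a standard Python pickle file and can be inspected directly. A minimal sketch (this assumes the file unpickles to a sequence of vocabulary words; the exact structure is not documented here):

import pickle

# Load the provided vocabulary (assumed to be a pickled list of words).
with open('./data/wordlist.p', 'rb') as f:
    wordlist = pickle.load(f)

print(len(wordlist))        # vocabulary size
print(list(wordlist)[:10])  # a few sample entries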

Fetch the train/val/test splits (same as NeuralTalk) for MSCOCO with

bash scripts/fetch_splits.sh

Download the train2014 and val2014 images and their annotations from the MSCOCO website and put them in ./data/coco

To train the model on MSCOCO from scratch,

python main.py model_dir

model_dir is the directory where the model and results are saved. Run python main.py -h for details on the other command-line arguments. Two checkpoints are saved: model.pth at the end of every epoch, and bestmodel.pth, the model that obtains the best score (on the CIDEr metric by default) over all epochs.
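
The saved checkpoints are regular PyTorch files and can be loaded outside of main.py with torch.load. A minimal sketch (what the checkpoint contains, e.g. which keys it holds, is an assumption and may differ from how train.py serializes the model):

import torch

# Load the best checkpoint written by main.py (path assumed from above).
checkpoint = torch.load('model_dir/bestmodel.pth')

# Peek at what was saved; a dictionary of state is typical, but the
# exact layout depends on the training code.
if isinstance(checkpoint, dict):
    print(checkpoint.keys())
else:
    print(type(checkpoint))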

To train the model without attention, use the --no-attention flag,

python main.py --no-attention model_dir

To test on MSCOCO with the released model,

python main.py -t 0 model_dir

model_dir should contain the released model bestmodel.pth. Running scripts/fetch_trained_model.sh will store the trained bestmodel.pth in ./data/.

To caption your own images,

python captionme.py model_dir image_dir

model_dir should contain the released model bestmodel.pth. Captions for *.png and *.jpg images in image_dir will be saved to image_dir/captions.txt. Run python captionme.py -h for additional options.
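
The generated captions.txt can then be post-processed with a few lines of Python. A minimal sketch, assuming one "image name, caption" pair per line (the exact format written by captionme.py is an assumption):

# Read back the generated captions (the per-line format is assumed, not documented here).
with open('image_dir/captions.txt') as f:
    for line in f:
        name, caption = line.rstrip('\n').split(',', 1)
        print(name + ' -> ' + caption.strip())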

If you use this code, please cite

@inproceedings{AnejaConvImgCap17,
  author = {Jyoti Aneja and Aditya Deshpande and Alexander Schwing},
  title = {Convolutional Image Captioning},
  booktitle = {Computer Vision and Pattern Recognition},
  url = {https://arxiv.org/abs/1711.09151},
  year = {2018}
}

The scores on the MSCOCO test split (http://cs.stanford.edu/people/karpathy/deepimagesent/) for the trained model released with this code are:

| Beam Size | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE | CIDEr |
|-----------|--------|--------|--------|--------|--------|-------|-------|
| 1         | .710   | .538   | .394   | .286   | .243   | .521  | .902  |
| 3         | .721   | .551   | .413   | .310   | .248   | .529  | .946  |

The scores on the MSCOCO test set (40,775 images) for the captioning challenge (http://cocodataset.org/#captions-eval) for the trained model released with this code are:

|     | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE | CIDEr |
|-----|--------|--------|--------|--------|--------|-------|-------|
| c5  | .716   | .545   | .406   | .303   | .245   | .525  | .906  |
| c40 | .895   | .803   | .691   | .579   | .331   | .673  | .914  |