Show, Attend and Tell
Update (December 2, 2016): TensorFlow implementation of Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, which introduces an attention-based image caption generator. The model shifts its attention to the relevant part of the image as it generates each word.
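As a rough picture of the soft attention mechanism from the paper, here is a minimal NumPy sketch (shapes, names, and the scoring function are illustrative, not the repo's actual code): at each step the decoder scores every spatial feature against its hidden state, normalizes the scores into weights, and takes the weighted sum as the context vector for the next word.

```python
import numpy as np

def soft_attention(features, hidden, W_f, W_h, w_a):
    # features: (L, D) annotation vectors from the CNN (e.g. L=196, D=512)
    # hidden:   (H,)  current decoder LSTM hidden state
    # W_f: (D, K), W_h: (H, K), w_a: (K,) -- illustrative learned projections
    scores = np.tanh(features.dot(W_f) + hidden.dot(W_h)).dot(w_a)  # (L,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()              # attention weights over image locations
    context = alpha.dot(features)     # (D,) context vector fed to the LSTM
    return context, alpha
```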
Author's Theano code: https://github.com/kelvinxu/arctic-captions
Another TensorFlow implementation: https://github.com/jazzsaxmafia/show_attend_and_tell.tensorflow
First, clone this repo and coco-caption (which provides the pycocoevalcap package) into the same directory.
$ git clone https://github.com/yunjey/show-attend-and-tell-tensorflow.git
$ git clone https://github.com/tylin/coco-caption.git
This code is written in Python 2.7 and requires TensorFlow 1.2. In addition, you need to install a few more packages to process the MSCOCO data set. I have provided a script to download the MSCOCO image dataset and the VGGNet19 model. Downloading the data may take several hours depending on your network speed. Run the commands below; the images will be downloaded into the image/ directory and the VGGNet19 model into the data/ directory.
$ cd show-attend-and-tell-tensorflow
$ pip install -r requirements.txt
$ chmod +x ./download.sh
$ ./download.sh
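If you want to sanity-check the download, the VGGNet19 weights are a MatConvNet .mat export that SciPy can read; the path below assumes download.sh placed the file under data/, so adjust it if your layout differs.

```python
import scipy.io

# Assumed location of the MatConvNet VGG-19 export fetched by download.sh.
vgg = scipy.io.loadmat('data/imagenet-vgg-verydeep-19.mat')
print(vgg.keys())  # the 'layers' entry holds the conv/fc weights
```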
To feed the images to VGGNet, you should resize the MSCOCO images to a fixed size of 224x224. Run the command below; the resized images will be stored in the image/train2014_resized/ and image/val2014_resized/ directories.
$ python resize.py
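resize.py does this for the whole dataset; the sketch below shows the equivalent operation for a single image with Pillow (file paths are illustrative).

```python
from PIL import Image

def resize_image(in_path, out_path, size=(224, 224)):
    # Resize to the fixed 224x224 input that VGGNet expects.
    image = Image.open(in_path).convert('RGB')
    image = image.resize(size, Image.ANTIALIAS)
    image.save(out_path)

# Hypothetical file; resize.py iterates over the whole image/ directory.
resize_image('image/train2014/example.jpg', 'image/train2014_resized/example.jpg')
```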
Before training the model, you have to preprocess the MSCOCO caption dataset. To generate the caption dataset and image feature vectors, run the command below.
$ python prepro.py
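Among other things, preprocessing maps caption words to integer indices; here is a minimal sketch of building such a vocabulary (the special tokens and frequency threshold are assumptions, not prepro.py's exact logic):

```python
from collections import Counter

def build_vocab(captions, threshold=1):
    # Count word frequencies across all training captions.
    counter = Counter(word for caption in captions
                           for word in caption.lower().split())
    # Reserve special tokens, then index every sufficiently frequent word.
    word_to_idx = {'<NULL>': 0, '<START>': 1, '<END>': 2}
    for word, count in counter.items():
        if count >= threshold:
            word_to_idx[word] = len(word_to_idx)
    return word_to_idx

print(build_vocab(['a man riding a horse', 'a dog on a couch']))
```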
Train the model
To train the image captioning model, run the command below.
$ python train.py
(Optional) TensorBoard visualization
I have provided a TensorBoard visualization for real-time debugging. Open a new terminal, run the command below, and open http://localhost:6005/ in your web browser.
$ tensorboard --logdir='./log' --port=6005
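The event files TensorBoard reads are produced by summary ops during training; this self-contained TensorFlow 1.x sketch shows how scalar summaries end up under ./log (the repo's real summaries are defined in its training code):

```python
import tensorflow as tf

# Illustrative scalar; in the real model this would be the batch loss tensor.
loss = tf.Variable(1.0, name='loss')
tf.summary.scalar('batch_loss', loss)
summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./log', sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(10):
        sess.run(loss.assign(loss * 0.9))  # pretend the loss is decreasing
        writer.add_summary(sess.run(summary_op), global_step=step)
    writer.close()
```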
Evaluate the model
To generate captions, visualize attention weights, and evaluate the model, please see evaluate_model.ipynb.
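For reference, the evaluation metrics come from the coco-caption (pycocoevalcap) package cloned earlier; here is a minimal sketch of scoring hypotheses against references with its Bleu scorer, assuming coco-caption is on your PYTHONPATH (the notebook wires this up against the real MSCOCO annotations):

```python
from pycocoevalcap.bleu.bleu import Bleu

# Both dicts map an image id to a list of caption strings.
references = {0: ['a man riding a horse', 'a person rides a horse']}
hypotheses = {0: ['a man is riding a horse']}

scores, _ = Bleu(n=4).compute_score(references, hypotheses)
print('BLEU-1..4:', scores)  # one score per n-gram order
```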