TensorFlow (TensorLayer) Implementation of Image Captioning ⚠️(Deprecated)⚠️

Image Captioning

We reimplemented Google's complicated Image Captioning model with simple TensorLayer APIs.

These scripts run under Python 2 or 3 with TensorFlow 0.10 or 0.11.

1. Prepare MSCOCO data and Inception model

Before you run the scripts, follow Google's setup guide, then set the model, checkpoint, and data directories in the *.py files (see the path sketch after the list below).

  • Create a data folder.
  • Download and preprocess the MSCOCO data: click here
  • Download the Inception V3 checkpoint: click here
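A minimal sketch of what the path configuration might look like; the variable names below (DIR_DATA, DIR_INCEPTION_CKPT, DIR_MODEL) are placeholders for illustration, not the exact names used in the scripts.

```python
# Hypothetical path configuration -- the actual variable names in *.py may differ.
import os

DIR_DATA = "data/mscoco"                       # preprocessed MSCOCO data
DIR_INCEPTION_CKPT = "data/inception_v3.ckpt"  # pretrained Inception V3 checkpoint
DIR_MODEL = "model"                            # where trained captioning checkpoints are saved

for path in (DIR_DATA, DIR_MODEL):
    if not os.path.isdir(path):
        os.makedirs(path)                      # create missing folders before training
```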

2. Train the model

  • via train.py (a sketch of the training objective follows below)
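Conceptually, training follows Google's recipe: the Inception V3 image embedding starts the LSTM decoder, and each caption word is predicted from the previous ground-truth word (teacher forcing) with a softmax cross-entropy loss. The NumPy sketch below only illustrates that per-caption loss; the shapes and names are illustrative assumptions, not the code in train.py.

```python
# Illustrative teacher-forcing loss for one caption (NumPy only); not the actual train.py code.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def caption_loss(step_logits, target_ids, mask):
    """step_logits: [n_steps, vocab_size] decoder outputs,
    target_ids: [n_steps] ground-truth next-word ids,
    mask: [n_steps] 1 for real words, 0 for padding."""
    probs = softmax(step_logits)
    nll = -np.log(probs[np.arange(len(target_ids)), target_ids] + 1e-12)
    return (nll * mask).sum() / mask.sum()     # average cross-entropy over real words

# toy example: 4 decoding steps, vocabulary of 10 words
logits = np.random.randn(4, 10)
loss = caption_loss(logits, np.array([2, 5, 1, 0]), np.array([1, 1, 1, 0]))
```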

3. Evaluate the model

  • via evaluate.py (see the perplexity sketch below)
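Evaluation for this kind of language model typically reports per-word perplexity on the validation captions, i.e. the exponential of the average cross-entropy; whether evaluate.py reports exactly this metric is an assumption here. A minimal sketch:

```python
# Perplexity from an average per-word cross-entropy (assumed evaluation metric).
import math

def perplexity(mean_cross_entropy):
    return math.exp(mean_cross_entropy)

print(perplexity(2.5))  # e.g. a loss of 2.5 nats/word -> perplexity of about 12.2
```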

4. Generate captions for a given image and model

  • via run_inference.py (a greedy decoding sketch follows below)
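At inference time the decoder is rolled out one word at a time: the image embedding initializes the LSTM, and each predicted word is fed back as the next input until an end-of-sentence token appears. The sketch below shows that loop with a stubbed next_word_logits function; the function, token ids, and sizes are placeholders, not the API of run_inference.py.

```python
# Greedy caption decoding loop (illustrative; next_word_logits stands in for the real model).
import numpy as np

START_ID, END_ID, MAX_LEN, VOCAB = 1, 2, 20, 1000

def next_word_logits(prev_word_id, state):
    """Placeholder for one LSTM step; returns fake logits and an unchanged state."""
    rng = np.random.RandomState(prev_word_id)
    return rng.randn(VOCAB), state

def greedy_caption(image_embedding):
    state, word, caption = image_embedding, START_ID, []
    for _ in range(MAX_LEN):
        logits, state = next_word_logits(word, state)
        word = int(np.argmax(logits))          # pick the most likely next word
        if word == END_ID:                     # stop at the end-of-sentence token
            break
        caption.append(word)
    return caption                             # word ids; map to strings with the vocabulary

print(greedy_caption(np.zeros(512)))
```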

5. Evaluation