
Image Captioning using Adaptive Attention with PyTorch


MicaTeo/knowingWhereToLook


PyTorch implementation of the paper "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning"

The original Torch implementation by Lu et al. can be found here

Dataset

I'm using the Flickr30k dataset; you can download the images from here. If you wish to use the COCO dataset instead, you will need to comment out 2 lines in the code.
I'm also using Karpathy's train/val/test split, which you can download from here.
You can also use the WORDMAP.json file in the repository if you don't want to create it again.
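
As a rough illustration, the word map is typically built from the training captions in Karpathy's split file. The sketch below is a minimal, hypothetical version of that step; the actual logic lives in preprocess.py, and the file name, threshold, and token scheme here are assumptions, not the repository's exact code.

import json
from collections import Counter

def build_word_map(karpathy_json_path, min_word_freq=5, out_path="WORDMAP.json"):
    """Hypothetical sketch: build a word-to-index map from Karpathy's split JSON."""
    with open(karpathy_json_path) as f:
        data = json.load(f)

    # Count word frequencies over all caption tokens
    word_freq = Counter()
    for img in data["images"]:
        for sentence in img["sentences"]:
            word_freq.update(sentence["tokens"])

    # Keep words above the frequency threshold; reserve special tokens
    words = [w for w, c in word_freq.items() if c >= min_word_freq]
    word_map = {w: i + 1 for i, w in enumerate(words)}
    word_map["<unk>"] = len(word_map) + 1
    word_map["<start>"] = len(word_map) + 1
    word_map["<end>"] = len(word_map) + 1
    word_map["<pad>"] = 0

    with open(out_path, "w") as f:
        json.dump(word_map, f)
    return word_map

# Example (assumed file name): build_word_map("dataset_flickr30k.json")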

Files

preprocess.py: Creates the WORDMAP.json file and the .h5 files
dataset.py: Creates the custom dataset
util.py: Utility functions used throughout the code
models.py: Defines the model architectures (a sketch of the adaptive attention step follows this list)
train_eval: Training and evaluation
visualization.ipynb: Testing and visualization
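
For orientation, here is a minimal sketch of the adaptive attention step with a visual sentinel, following the equations in Lu et al. It is not the exact code in models.py; the layer names, dimensions, and the assumption that the spatial features share the decoder's hidden size are mine.

import torch
import torch.nn as nn

class AdaptiveAttention(nn.Module):
    def __init__(self, hidden_dim, att_dim):
        super().__init__()
        self.affine_v = nn.Linear(hidden_dim, att_dim)   # projects spatial features V
        self.affine_s = nn.Linear(hidden_dim, att_dim)   # projects the sentinel s_t
        self.affine_h = nn.Linear(hidden_dim, att_dim)   # projects the decoder state h_t
        self.alpha_layer = nn.Linear(att_dim, 1)         # scores each region/sentinel slot

    def forward(self, V, h_t, s_t):
        # V: (batch, num_regions, hidden_dim); h_t, s_t: (batch, hidden_dim)
        content_v = self.affine_v(V)                                 # (B, k, A)
        content_s = self.affine_s(s_t).unsqueeze(1)                  # (B, 1, A)
        content_h = self.affine_h(h_t).unsqueeze(1)                  # (B, 1, A)

        # Concatenate the sentinel as an extra "region" and score everything together
        regions = torch.cat([content_v, content_s], dim=1)           # (B, k+1, A)
        scores = self.alpha_layer(torch.tanh(regions + content_h))   # (B, k+1, 1)
        alpha_hat = torch.softmax(scores.squeeze(-1), dim=1)         # (B, k+1)

        # beta_t is the weight placed on the sentinel (the last slot)
        beta = alpha_hat[:, -1].unsqueeze(1)                         # (B, 1)
        alpha = alpha_hat[:, :-1]                                    # (B, k)

        # c_hat_t = beta_t * s_t + (1 - beta_t) * c_t
        c_t = (alpha.unsqueeze(-1) * V).sum(dim=1)                   # (B, hidden_dim)
        c_hat = beta * s_t + (1 - beta) * c_t
        return c_hat, alpha, beta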

Testing

It's very simple! Place the test image in your directory, name it test.jpg, and run the visualization.ipynb Jupyter notebook to get the results.
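
In case it helps, the notebook's single-image path roughly amounts to the sketch below. The preprocessing transform is a standard ImageNet-style pipeline I'm assuming here; the checkpoint name and the encoder/decoder captioning calls are hypothetical placeholders, not the notebook's actual API.

import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing: resize and ImageNet normalization
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = transform(Image.open("test.jpg").convert("RGB")).unsqueeze(0)  # (1, 3, 256, 256)

# Hypothetical captioning calls; see visualization.ipynb for the real ones:
# checkpoint = torch.load("checkpoint.pth", map_location="cpu")
# encoder, decoder = checkpoint["encoder"], checkpoint["decoder"]
# with torch.no_grad():
#     features = encoder(image)
#     caption, alphas, betas = decoder.caption(features, word_map)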

Results

Results on some validation and test images from the Flickr30k Karpathy split are shown below.

(Result images: final1, final2)

References

Thanks to sgrvinod's a-PyTorch-Tutorial-to-Image-Captioning: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning
