This project is part of the Visual Intelligence workshop conducted by the CEVI LAB at KLE Technological University.

This project aims to aid the visually challenged community by interpreting scenes and captioning images; the captions are additionally converted to audio as a visual aid. The MS COCO dataset is used for training and testing.
The model architecture is similar to that of Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. An attention-based model is used, which lets us see the parts of the image the model attended to while generating the caption. Image features are extracted using a pretrained Inception V3 model, and an encoder-decoder model is trained on the vocabulary built from the dataset to generate accurate captions.
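The feature-extraction step described above can be sketched as follows. This is a minimal illustration (the function and variable names here are not from the project code): Inception V3 is loaded without its classification head, so its final convolutional feature map serves as the grid of spatial locations the attention mechanism attends over.

```python
import tensorflow as tf

# Load Inception V3 pretrained on ImageNet, dropping the classification head.
# For 299x299 inputs the final conv feature map is 8x8x2048; the attention
# mechanism treats this as 64 spatial locations with 2048-d features each.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(base.input, base.output)

def extract_features(image_path):
    # Decode and resize to Inception V3's expected 299x299 input.
    img = tf.io.read_file(image_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (299, 299))
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    feats = feature_extractor(tf.expand_dims(img, 0))  # (1, 8, 8, 2048)
    # Flatten the 8x8 grid into 64 attention locations.
    return tf.reshape(feats, (1, -1, feats.shape[3]))  # (1, 64, 2048)
```

The flattened features are then fed to the encoder, and the decoder attends over the 64 locations at each decoding step.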
The following is a test image:
The model returns a caption for the image, drawn from the vocabulary it was trained on. The bounding boxes show which parts of the image were attended to for each word.
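A simple way to produce such a visualization is to overlay each word's attention weights on the image with Matplotlib. The sketch below assumes one attention distribution over the 64 feature locations per generated word (the function name and argument layout are illustrative, not taken from the project code):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(image, caption_words, attention_weights):
    """Overlay each word's 8x8 attention map on the image.

    image: HxWx3 array; caption_words: list of predicted tokens;
    attention_weights: array of shape (len(caption_words), 64), one
    attention distribution over the 64 feature locations per word.
    """
    fig = plt.figure(figsize=(10, 10))
    for i, word in enumerate(caption_words):
        att = np.resize(attention_weights[i], (8, 8))
        ax = fig.add_subplot((len(caption_words) + 1) // 2, 2, i + 1)
        ax.set_title(word)
        ax.imshow(image)
        # Semi-transparent heat map shows where the model "looked"
        # while generating this word.
        ax.imshow(att, cmap="gray", alpha=0.6,
                  extent=(0, image.shape[1], image.shape[0], 0))
    plt.tight_layout()
    plt.show()
    return fig
```

Brighter regions in the overlay correspond to higher attention weights for that word.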
The code can be run directly on Google Colab.

Alternatively, before running it locally on Jupyter, please ensure the following Python libraries are installed:
- TensorFlow
- Matplotlib
- NumPy
- gTTS (Google Text-to-Speech)

The notebook also uses the `collections`, `random`, `os`, `time`, and `json` modules, which are part of the Python standard library and need no installation.
- Image Captioning with Visual Attention, by MarkDaoust (TensorFlow tutorial)
- What is an encoder-decoder model?
- Inception V3 deep convolutional architecture
- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Please feel free to contribute to this project to help it better achieve its objective.