Arabic-Image-Captioning

Generate Arabic captions for images using Deep Learning

Paper: here

Dataset: ./data/Flickr8k_text/Flickr8k.arabic.full.txt

Presentation: here

Introduction

Image captioning is the task of automatically describing the content of an image by a computer. It is a well-known problem at the intersection of computer vision (CV) and natural language processing (NLP), with many applications such as improved information retrieval, early childhood education, assistance for visually impaired people, and social media. Although remarkable work has been accomplished recently for English, progress on Arabic image captioning still lags behind, mainly due to the lack of a large, publicly available dataset. Therefore, we developed our own dataset based on Flickr8K. It can be found under the data/Flickr8k_text/ folder, named Flickr8k.arabic.full.txt, and follows exactly the same format as the original Flickr8K caption file.
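As a quick illustration, the sketch below shows one way to parse the caption file, assuming it follows the original Flickr8K layout (one caption per line: image file name, a # plus caption index, a tab, then the caption text); the function and variable names are just examples:

from collections import defaultdict

def load_captions(path):
    # Parse a Flickr8K-style caption file: <image_name>#<index><TAB><caption>
    captions = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            image_id, caption = line.split("\t", 1)
            image_name = image_id.split("#")[0]  # drop the caption index
            captions[image_name].append(caption)
    return captions

captions = load_captions("data/Flickr8k_text/Flickr8k.arabic.full.txt")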

Model

Inspired by recent advances in neural machine translation, the sequence-to-sequence encoder-decoder approach was adopted here (figure: seq2seq-image-captioning-arabic). For more details, please check the paper.
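As a rough sketch only, and not necessarily the exact architecture or hyperparameters used in the paper, the snippet below builds a common merge-style encoder-decoder captioner in Keras: a fixed-length image feature vector (e.g. extracted by a pretrained CNN) on the encoder side, an LSTM over the partial Arabic caption on the decoder side, and a softmax over the vocabulary that predicts the next word. The feature size, vocabulary size, and caption length are illustrative assumptions:

from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size = 10000   # size of the Arabic vocabulary (assumed)
max_len = 30         # maximum caption length in tokens (assumed)
feat_dim = 2048      # dimensionality of the precomputed CNN image features (assumed)

# Encoder side: project the image feature vector into the joint space
image_input = Input(shape=(feat_dim,))
image_features = Dropout(0.5)(image_input)
image_features = Dense(256, activation="relu")(image_features)

# Decoder side: embed the partial caption and run it through an LSTM
caption_input = Input(shape=(max_len,))
caption_embedding = Embedding(vocab_size, 256, mask_zero=True)(caption_input)
caption_embedding = Dropout(0.5)(caption_embedding)
caption_features = LSTM(256)(caption_embedding)

# Merge both representations and predict the next word
merged = add([image_features, caption_features])
merged = Dense(256, activation="relu")(merged)
output = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="adam")

At inference time, the caption is generated token by token, feeding each predicted word back into the decoder (e.g. greedy decoding or beam search).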

Results

Good examples: (figure: good_examples)

Bad examples: (figure: bad_examples)

Results show that developing language-specific datasets and end-to-end models outperforms translating English-generated captions into the target language, since the latter approach may accumulate errors from both models: image captioning and translation.

Installation

  1. Clone or download this repository.

  2. Make sure Python 3.x is installed on your machine. To check whether Python is installed and which version you have, run the following command:

python -V

If this results in an error, Python is not installed on your machine; please download and install it from the official website.

  3. (Optional) It is recommended to create a Python virtual environment and install the required libraries in it, to avoid version conflicts:
python -m venv AIC

where AIC is the environment name. Once you have created the virtual environment, activate it (the command below is for Windows):

AIC\Scripts\activate.bat

  4. Install the required libraries from the provided requirements.txt file:
pip install -r requirements.txt

Make sure you provide the correct path to requirements.txt.

  5. Open Jupyter Notebook:
AIC\Scripts\jupyter-notebook.exe

then navigate to and open Arabic Image Captioning.ipynb

Citation

Please cite this paper:

@conference{visapp20,
author={ElJundi, Obeida and Dhaybi, Mohamad and Mokadam, Kotaiba and Hajj, Hazem and Asmar, Daniel},
title={Resources and End-to-End Neural Network Models for Arabic Image Captioning},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2020},
pages={233-241},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008881202330241},
isbn={978-989-758-402-2},
}

Contact information

For help, issues, or personal communication related to this work, please contact Obeida ElJundi: oae15@mail.aub.edu
