Show&Tell

English | 简体中文

MindSpore implementation of "Show and Tell: A Neural Image Caption Generator"

Pre-requisites

  • MindSpore 2.0.0
  • Familiarity with Convolutional Neural Networks (CNNs)
  • Familiarity with Long Short-Term Memory (LSTM) cells

Usage

Clone the repo:

git clone https://openi.pcl.ac.cn/Kayxxx/ShowAndTell.git

1. Flickr8k Dataset

  • Prepare the Flickr8k dataset.
  • Extract the archive, move the images into a folder named Images and the caption annotations into captions.txt.
  • Put the folder containing Images and captions.txt inside a folder named flickr8k, laid out as follows (a small parsing sketch follows the tree):
flickr8k
|-- Images
|   |-- 1000268201_693b08cb0e.jpg
|   |-- ......
|-- captions.txt
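
The exact captions.txt format depends on where the dataset was downloaded from; the widely used Kaggle release is a CSV with a header line followed by image,caption rows. As an illustration only (load_captions is not a function from this repository), such a file can be read into a per-image list of captions like this:

```python
import csv
from collections import defaultdict

def load_captions(captions_path="flickr8k/captions.txt"):
    """Read captions.txt into a dict: image file name -> list of captions.

    Assumes the common Kaggle layout: a header line followed by
    `image,caption` rows; adjust the parsing if your copy differs.
    """
    captions = defaultdict(list)
    with open(captions_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)                                   # skip the `image,caption` header
        for row in reader:
            image_name, caption = row[0], ",".join(row[1:])
            captions[image_name].append(caption.strip().lower())
    return captions

# e.g. print the reference captions of the sample image from the tree above
print(load_captions()["1000268201_693b08cb0e.jpg"])
```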

2. Training

  • Run the following command:
python train.py
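
Training fits the Show and Tell encoder-decoder on the prepared Flickr8k data: a pretrained CNN turns each image into a feature vector, which is projected into the word-embedding space and fed to an LSTM as the first time step, followed by the embedded caption tokens. The snippet below is only a rough sketch of that handover, not the repository's actual code; CaptionDecoder and all dimension names are illustrative, and the CNN encoder is omitted:

```python
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

class CaptionDecoder(nn.Cell):
    """Illustrative Show-and-Tell style decoder (names are hypothetical)."""

    def __init__(self, feature_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.img_proj = nn.Dense(feature_dim, embed_dim)   # CNN feature -> embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)   # caption tokens -> embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=1, batch_first=True)
        self.fc = nn.Dense(hidden_dim, vocab_size)         # LSTM hidden state -> vocab logits

    def construct(self, features, captions):
        # features: (batch, feature_dim) from the CNN encoder
        # captions: (batch, seq_len) token ids
        batch = features.shape[0]
        img_step = ops.expand_dims(self.img_proj(features), 1)   # image acts as the first "word"
        word_steps = self.embed(captions)
        inputs = ops.concat((img_step, word_steps), axis=1)      # (batch, seq_len + 1, embed_dim)
        h0 = ops.zeros((1, batch, self.hidden_dim), ms.float32)
        c0 = ops.zeros((1, batch, self.hidden_dim), ms.float32)
        outputs, _ = self.lstm(inputs, (h0, c0))
        return self.fc(outputs)                                  # logits for every time step
```

A cross-entropy loss over these per-step logits against the shifted caption tokens trains the projection, the embedding and the LSTM jointly.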

3. Inference

  • Run the following command:
python inference.py --image_path <path_to_image> --model_path <path_to_model>
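
At inference time the simplest decoding strategy from the paper is greedy search: feed the projected image feature into the LSTM, then repeatedly feed back the most probable word until an end token (or a length limit) is reached. A minimal sketch built on the hypothetical CaptionDecoder above; how start/end tokens are assigned depends on how the repository builds its vocabulary:

```python
import mindspore as ms
import mindspore.ops as ops

def greedy_caption(decoder, features, end_id, max_len=20):
    """Greedy decoding sketch: take the argmax word at every step (illustrative only)."""
    h = ops.zeros((1, 1, decoder.hidden_dim), ms.float32)
    c = ops.zeros((1, 1, decoder.hidden_dim), ms.float32)
    # Prime the LSTM with the projected image feature, shape (1, 1, embed_dim).
    step_input = ops.expand_dims(decoder.img_proj(features), 1)
    tokens = []
    for _ in range(max_len):
        output, (h, c) = decoder.lstm(step_input, (h, c))
        logits = decoder.fc(output[:, -1, :])                 # (1, vocab_size)
        word_id = int(logits.argmax(axis=-1).asnumpy()[0])
        if word_id == end_id:
            break
        tokens.append(word_id)
        # Feed the chosen word back in as the next input.
        step_input = ops.expand_dims(decoder.embed(ms.Tensor([word_id], ms.int32)), 1)
    return tokens   # map the ids back to words with the vocabulary
```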


4. Results

Some of the results obtained are shown below:

Caption: a man is standing on top of a mountain gazing at the sunset.

Bad Case:

Caption: a young boy is holding a yellow ball.

References