image-captioner

Image captioning utility scripts for preparing image-caption datasets. Uses the LM Studio API with any vision model.

Prerequisites

  1. Python 3.8+
  2. Install LM Studio.
  3. Download any vision model from LM Studio, e.g. Gemma 4 or Qwen 3.5.
  4. In caption.py, under payload:model, change your-vision-model to the name of the model you want to use.
  5. Go to the LM Studio developer section and enable the API.
  6. The default API URL is http://localhost:1234; the script connects to this URL.
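For reference, the payload mentioned in step 4 follows LM Studio's OpenAI-compatible chat format, with the image embedded as a base64 data URL. This is a minimal sketch (the function name build_payload is mine, not from the repo):

```python
import base64

def build_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,  # replace "your-vision-model" with your model's name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }
        ],
    }
```

The "model" field is where your-vision-model gets swapped out, per step 4 above.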

Step 1: Preparing your image dataset

  1. Put your image dataset in the images folder.
  2. Run the enumeration script to automatically rename the images to 000.jpg, 001.jpg, 002.jpg, etc.
python enumerate.py
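The renaming step can be sketched as follows. This is an illustrative version, not the repo's enumerate.py: it keeps each file's original extension and assumes the original names don't already collide with the new zero-padded ones.

```python
from pathlib import Path

def enumerate_images(folder: str = "images") -> None:
    """Rename images in `folder` to 000.*, 001.*, ... in sorted name order."""
    exts = {".jpg", ".jpeg", ".png", ".webp"}
    files = sorted(p for p in Path(folder).iterdir()
                   if p.suffix.lower() in exts)
    for i, p in enumerate(files):
        # Zero-padded names keep the dataset sorted in file browsers.
        p.rename(p.with_name(f"{i:03d}{p.suffix.lower()}"))
```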

Step 2: Auto caption your images

  1. Run the caption.py script to auto-caption your images.
python caption.py
  2. All captions are saved in a single file, result.txt. A single file makes it easy to edit multiple captions at once; I use VSCode for editing.
  3. You can write your own input prompt in caption.py.
  4. Default input prompt: Write caption for this image under 100 words. Focus on subject and environment. Avoid speculation. Don't be poetic be precise.
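The captioning call itself is a POST to LM Studio's OpenAI-compatible /v1/chat/completions endpoint. A minimal sketch (function names are mine; the actual caption.py may differ):

```python
import json
import urllib.request

API_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio default

def caption_image(payload: dict, url: str = API_URL) -> str:
    """POST a chat payload to LM Studio and return the caption text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_caption(json.load(resp))

def extract_caption(reply: dict) -> str:
    """Pull the caption text out of an OpenAI-style chat completion reply."""
    return reply["choices"][0]["message"]["content"].strip()
```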

Step 3: Finalize your dataset

  1. After you are done editing captions, run the chop.py script. This splits the captions into individual files.
python chop.py
  2. All captions are saved in the captions/ folder.
  3. Combine the captions and images folders to create the final dataset, and you are good to go.
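The splitting step can be sketched like this. Note the assumption: I am guessing result.txt stores one "NNN.jpg: caption" entry per line, which the repo does not specify, so treat the parsing as illustrative only.

```python
from pathlib import Path

def chop(result_file: str = "result.txt", out_dir: str = "captions") -> int:
    """Split result.txt (assumed format: 'NNN.jpg: caption' per line)
    into one caption file per image, e.g. captions/000.txt."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for line in Path(result_file).read_text(encoding="utf-8").splitlines():
        if ":" not in line:
            continue  # skip blank or malformed lines
        name, caption = line.split(":", 1)
        stem = Path(name.strip()).stem  # "000.jpg" -> "000"
        (out / f"{stem}.txt").write_text(caption.strip(), encoding="utf-8")
        count += 1
    return count
```

Each NNN.txt then pairs with the matching NNN.jpg from the images folder.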

If this project saved you time and effort, consider supporting me on Ko-Fi.
