Youho99/LabelForge

LabelForge

LabelForge is an application for pre-generating labels for computer vision tasks.

Currently supported labels:

  • Bounding boxes (Grounding DINO)
  • Segmentation masks (Grounded SAM)

To launch LabelForge:

Install the dependencies: pip install -r requirements.txt

Run LabelForge with the launch script: ./labelforge.sh

Or run it directly with the Streamlit command: streamlit run LabelForge.py

Documentation

Step 1: Images Selection

On the Images Selection page, you can upload the images to be labeled.

These images are generally a small representative sample of your overall image dataset.

Once the images are uploaded, they are saved in the cache, so there is no need to validate anything.

However, if you return to the Images Selection page, the cache will be cleared.

Only PNG images are supported.

Warning: do not upload two images with the same name; one of them will be overwritten.
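Because duplicate file names silently overwrite each other, it can help to check your selection before uploading. A minimal sketch (the helper below is hypothetical, not part of LabelForge):

```python
from collections import Counter
from pathlib import Path

def find_name_collisions(paths):
    """Return the file names that appear more than once in a selection.

    `paths` is any iterable of path strings; only the final file name
    matters, since files with the same name overwrite each other.
    """
    counts = Counter(Path(p).name for p in paths)
    return sorted(name for name, n in counts.items() if n > 1)

# Two different folders both contain an "img_001.png":
print(find_name_collisions(["a/img_001.png", "b/img_001.png", "b/img_002.png"]))
# -> ['img_001.png']
```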

Step 2: Model Parameters

On the Model Parameters page, you need to choose a model, depending on the task you want to accomplish.

You must also configure the classes you want to detect. You can add as many classes as needed.

A class consists of a name (what will be displayed) and a prompt (the words or phrase in natural language that the model interprets to automatically label your images).

Once the model and classes are configured, they are saved in the cache, so there is no need to validate anything.
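Conceptually, each class pairs a display name with a natural-language prompt. A sketch of that pairing in Python (the `LabelClass` type and example values are illustrative, not LabelForge's actual internals):

```python
from dataclasses import dataclass

@dataclass
class LabelClass:
    name: str    # displayed in the generated labels
    prompt: str  # natural-language text the model interprets

# Hypothetical configuration: prompts can be longer and more
# descriptive than the display name to help the model.
classes = [
    LabelClass(name="car", prompt="a car parked or driving on the road"),
    LabelClass(name="person", prompt="a pedestrian walking"),
]

for c in classes:
    print(f"{c.name}: {c.prompt}")
```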

Step 3: Labels Generation

On the Labels Generation page, you can generate labels for your classes on the previously uploaded images.

Once the labels have been generated, you can adjust the confidence threshold for each class to find the value that works best.

It is recommended to test several prompts for the same class to obtain better results.
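Adjusting a per-class confidence threshold amounts to keeping only the detections whose confidence meets that class's minimum. A hedged sketch (the function and data shapes are assumptions, not LabelForge's API):

```python
def filter_detections(detections, thresholds, default=0.5):
    """Keep detections whose confidence meets their class threshold.

    detections: list of (class_name, confidence) pairs.
    thresholds: dict mapping class name -> minimum confidence;
    classes without an entry fall back to `default`.
    """
    return [
        (cls, conf)
        for cls, conf in detections
        if conf >= thresholds.get(cls, default)
    ]

dets = [("car", 0.92), ("car", 0.41), ("person", 0.63)]
print(filter_detections(dets, {"car": 0.5, "person": 0.7}))
# -> [('car', 0.92)]
```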

Step 4: Auto Labeling Dataset

On the Auto Labeling Dataset page, you can configure your full image dataset in order to run the auto-labeling process on the entire dataset.

You can readjust the confidence thresholds of your classes if necessary.

You can also change the model to use if necessary.

You can choose the dataset format you want as output:

  • Standard format: folders containing the images and the generated labels.
  • CVAT format: a folder structure that can be dragged and dropped onto the CVAT interface (to upload both the images and the annotations).
    • For the Grounding DINO model, use the YOLO v1.1 mode on CVAT and drag & drop the zip file.
    • For the Grounded SAM model, use the COCO v1.0 mode on CVAT and drag & drop the json file.
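As an illustration of the YOLO-style output mentioned above, a bounding box in pixel coordinates is stored as normalized center/size values. A minimal conversion sketch (not LabelForge's actual export code):

```python
def to_yolo(box, img_w, img_h):
    """Convert (x_min, y_min, x_max, y_max) pixel coordinates to the
    YOLO (x_center, y_center, width, height) form, normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    return (
        (x_min + x_max) / 2 / img_w,
        (y_min + y_max) / 2 / img_h,
        (x_max - x_min) / img_w,
        (y_max - y_min) / img_h,
    )

# A 200x200 box centered at (200, 150) in a 400x400 image:
print(to_yolo((100, 50, 300, 250), 400, 400))
# -> (0.5, 0.375, 0.5, 0.5)
```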
