OmniLabelTools

A Python toolkit for the OmniLabel benchmark (https://www.omnilabel.org)

OmniLabel benchmark banner

Main features:

  • evaluation of prediction results
  • visualization of ground truth and predictions
  • extraction of basic statistics from the dataset annotations

Install | Dataset setup | Annotation format | Evaluate your results | License

Install

Install OmniLabelTools as follows:

git clone https://www.github.com/samschulter/omnilabeltools
cd omnilabeltools
pip install .

You can also install it in editable (developer) mode:

pip install -e .
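
To quickly check that the install worked, you can try importing the package's main classes (the same classes used in the evaluation example below):

python -c "from omnilabeltools import OmniLabel, OmniLabelEval"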

Dataset setup

Please visit https://www.omnilabel.org/dataset/download for download and setup instructions. To verify the dataset setup, you can run the following two scripts, which print basic dataset statistics and visualize a few examples, respectively:

olstats --path-to-json path/to/dataset/gt/json

olvis --path-to-json path/to/dataset/gt/json --path-to-imgs path/to/image/directories --path-output some/directory/to/store/visualizations
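
If you prefer to inspect the file directly, the following minimal sketch computes similar counts using only the standard library. It assumes the ground-truth JSON layout described in the next section; the path is a placeholder.

import json

# Placeholder path; point this at your ground-truth JSON file
with open("path/to/dataset/gt/json") as f:
    gt = json.load(f)

print(f"images:       {len(gt['images'])}")
print(f"descriptions: {len(gt['descriptions'])}")
# 'annotations' is only present in val sets (see the annotation format below)
print(f"annotations:  {len(gt.get('annotations', []))}")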

Annotation format

In general, we try to follow the MS COCO dataset format as much as possible, with all annotations stored in a single JSON file. Please see https://www.omnilabel.org/dataset/download and https://www.omnilabel.org/task for more details.

Ground truth data

{
    images: [
        {
            id              ... unique image ID
            file_name       ... path to image, relative to a given base directory (see above)
        },
        ...
    ],
    descriptions: [
        {
            id              ... unique description ID
            text            ... the text of the object description
            image_ids       ... list of image IDs for which this description is part of the label space
            anno_info       ... some metadata about the description
        },
        ...
    ],
    annotations: [        # Only for val sets. Not given in test set annotations!
        {
            id              ... unique annotation ID
            image_id        ... the image id this annotation belongs to
            bbox            ... the bounding box coordinates of the object (x,y,w,h)
            description_ids ... list of description IDs that refer to this object
        },
        ...
    ]
}
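
As a concrete illustration, here is a minimal ground-truth file built as a Python dictionary and written out with the standard library; all IDs, file names, and description text are made-up placeholders.

import json

# Minimal made-up example following the ground-truth format above
gt = {
    "images": [
        {"id": 1, "file_name": "images/000001.jpg"},
    ],
    "descriptions": [
        {
            "id": 10,
            "text": "the red car parked on the left",
            "image_ids": [1],
            "anno_info": {},  # metadata about the description; contents depend on the dataset
        },
    ],
    "annotations": [  # val sets only
        {"id": 100, "image_id": 1, "bbox": [50, 60, 120, 80], "description_ids": [10]},
    ],
}

with open("example_gt.json", "w") as f:
    json.dump(gt, f, indent=2)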

Submitting prediction results

NB: The test server is not online at this time. Once it is online, prediction results should be submitted in the following format:

[
    {
        image_id        ... the image id this predicted box belongs to
        bbox            ... the bounding box coordinates of the object (x,y,w,h)
        description_ids ... list of description IDs that refer to this object
        scores          ... list of confidences, one for each description
    },
    ...
]
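
For example, a results file in this format could be assembled and written as below; the boxes, IDs, and scores are made-up placeholders, and each score corresponds to the description ID at the same position.

import json

# Made-up example predictions following the result format above
results = [
    {
        "image_id": 1,
        "bbox": [48, 62, 118, 79],     # (x, y, w, h)
        "description_ids": [10, 12],
        "scores": [0.91, 0.35],        # one confidence per description
    },
]

with open("example_results.json", "w") as f:
    json.dump(results, f)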

Evaluate your results

Here is some example code showing how to evaluate results:

from omnilabeltools import OmniLabel, OmniLabelEval

gt = OmniLabel(data_json_path)              # load ground truth dataset
dt = gt.load_res(res_json_path)             # load prediction results
ole = OmniLabelEval(gt, dt)
ole.params.recThrs = ...                    # set evaluation parameters as desired
ole.evaluate()
ole.accumulate()
ole.summarize()

We also provide a stand-alone script:

oleval --path-to-gt path/to/gt/json --path-to-res path/to/result/json

The results JSON file must follow the format described above.

License

This project is released under the MIT License.
