
tckrishna/colorchecker


Colorchecker

This repo contains the data, YAML configuration and weights needed to train a colorscale checker using YOLOv3.

This repository represents IDLab's open-source colorscale checker based on object detection methods, intended for use in the cultural heritage domain. It incorporates lessons learned and best practices evolved over hours of training and evolution on manually labelled datasets. This repo is based on the Ultralytics YOLOv3 PyTorch implementation. All code and models are under active development and are subject to modification or deletion without notice. Use at your own risk.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install, run:

$ pip install -r requirements.txt

Tutorials

Download Pretrained weights and dataset

Download the pretrained weights (trained for ~300 epochs) to use the model directly for inference.

To train the model, download colorchecker.yaml and the colorchecker dataset. The dataset is a small 60-image dataset already split into train, test and evaluate subsets. Train and test are used for training and validating the model, while evaluate is used to test the model after training.

The pretrained weights and the dataset are provided in weights.zip and dataset.zip respectively.
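The README does not document the exact sizes of the train/test/evaluate subsets, only that the 60 images are pre-split. The sketch below illustrates how such a split can be produced; the ratios and file names are hypothetical, not taken from the actual dataset:

```python
import random

def split_dataset(image_names, train=0.7, test=0.2, seed=0):
    """Split image file names into train/test/evaluate subsets.

    The colorchecker dataset ships pre-split; the ratios here are
    illustrative defaults, not the dataset's actual proportions.
    """
    names = list(image_names)
    random.Random(seed).shuffle(names)  # deterministic shuffle for reproducibility
    n = len(names)
    n_train = int(n * train)
    n_test = int(n * test)
    return {
        "train": names[:n_train],
        "test": names[n_train:n_train + n_test],
        "evaluate": names[n_train + n_test:],
    }

# 60 dummy file names, mirroring the dataset's size
splits = split_dataset([f"img_{i:03d}.jpg" for i in range(60)])
print({k: len(v) for k, v in splits.items()})  # {'train': 42, 'test': 12, 'evaluate': 6}
```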

Inference

detect.py runs inference on a variety of sources, using the model unzipped from weights.zip, and saves results to runs/detect.

$ python detect.py --source 0  # webcam
                            file.jpg  # image 
                            file.mp4  # video
                            path/  # directory
                            path/*.jpg  # glob
                            'https://youtu.be/NUsoVlDFqZg'  # YouTube video
                            'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
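Conceptually, detect.py inspects the --source string to decide which input pipeline to use. The function below is an illustrative sketch of that dispatch, not the actual Ultralytics implementation (whose rules are more thorough):

```python
def classify_source(source: str) -> str:
    """Rough sketch of how a --source argument could be categorized.

    Hypothetical illustration of the dispatch detect.py performs
    internally; names and rules here are simplified.
    """
    source = str(source)
    if source.isdigit():                 # "0", "1", ... -> webcam index
        return "webcam"
    if source.lower().startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return "stream"                  # network stream or YouTube URL
    if "*" in source:                    # shell-style pattern
        return "glob"
    if source.endswith("/"):             # trailing slash -> directory of images
        return "directory"
    return "file"                        # single image or video file

print(classify_source("0"))                  # webcam
print(classify_source("dataset/evaluate/"))  # directory
print(classify_source("path/*.jpg"))         # glob
```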

To run inference on example images in dataset/evaluate:

$ python detect.py --weights weights/best.pt --img 640 --conf 0.25 --source dataset/evaluate/

Use --crop True in case you want to crop out or remove the detected colorscale from the picture.

$ python detect.py --weights weights/best.pt --img 640 --conf 0.25 --source dataset/evaluate/ --crop True
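Conceptually, cropping a detected colorscale is just slicing the detection's bounding box out of the image. The sketch below shows the idea on a plain nested-list "image"; the real code operates on numpy arrays of pixels, and the function name and box convention here are illustrative assumptions:

```python
def crop_box(image, box):
    """Crop a detection box from a row-major 2-D grid of pixels.

    `box` is (x1, y1, x2, y2) in pixel coordinates, the usual YOLO
    detection convention. Conceptual sketch only, not the repo's code.
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# 8x6 dummy image where each "pixel" records its own (y, x) coordinates
image = [[(y, x) for x in range(8)] for y in range(6)]
patch = crop_box(image, (2, 1, 6, 4))
print(len(patch), len(patch[0]))  # 3 4  (height x width of the cropped box)
```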

Training

Train the YOLOv3 model on the colorchecker dataset with --data colorchecker.yaml, starting either from the pretrained weights (--weights weights/last.pt) or from randomly initialized weights (--weights '' --cfg yolov3.yaml). Models are downloaded automatically from the latest YOLOv3 release.

All training results are saved to runs/train/ with incrementing run directories, i.e. runs/train/exp2, runs/train/exp3 etc.

$ python train.py --img 640 --batch 16 --epochs 100 --data colorchecker.yaml --weights weights/last.pt --nosave --cache

About Us

Research group homepage: LINK

Over the past years, IDLab researchers at Ghent University have developed several building blocks for multimodal data processing, computer vision, NLP/NER, cross-collection linking, spatio-temporal enrichment and data mining that are predominantly used to address various real-world problems.

In the cultural heritage domain, the team of Prof. Steven Verstockt has built up extensive expertise and (inter)national collaboration through the UGESCO, EURECA, Flore de Gand, CHANGE, DATA-KBR-BE, FAME and Museum in de Living projects.

Links to demonstrators of related projects:

  • DATA-KBR-BE (Document layout analysis, image similarity, NER)
  • Flore de Gand (Digitization, cross-collection linking, herbaria)
  • Eureca (HTR, document layout analysis, labeling tool)
  • Ugesco (Object detection, NER, crowdsourcing)

Contact

Issues should be raised directly in the repository. For business inquiries or professional support requests please visit https://s-team-ghent.github.io/ or email Prof. Steven Verstockt at steven.verstockt@ugent.be.

Citation

DOI
