CVAT is an interactive video and image annotation tool for computer vision. It is used by tens of thousands of users and companies around the world. CVAT is free and open-source.
A new repository: the CVAT core team has moved active development of the tool to this repository. Our mission is to help developers, companies, and organizations around the world solve real problems using a data-centric AI approach.
Start using CVAT online for free at cvat.ai, or set it up as a self-hosted solution: read here.
- Installation guide
- Manual
- Contributing
- Datumaro dataset framework
- Server API
- Python SDK
- Command line tool
- XML annotation format
- AWS Deployment Guide
- Frequently asked questions
- Where to ask questions
CVAT is used by teams all over the world. If your team uses CVAT, drop us a line at contact@cvat.ai and we'll add you to this list.
- ATLANTIS, an open-source dataset for semantic segmentation of waterbody images, developed by the iWERS group in the Department of Civil and Environmental Engineering at the University of South Carolina, was annotated with CVAT. To see how a semantic segmentation dataset can be developed with CVAT, check the published ATLANTIS article, the ATLANTIS Development Kit, and the annotation tutorial videos.
- Onepanel is an open-source vision AI platform that fully integrates CVAT with scalable data processing and parallelized training pipelines.
- DataIsKey uses CVAT as its primary data labeling tool, offering annotation services for projects of any size.
- Human Protocol uses CVAT to provide annotation services within the Human Protocol.
- Cogito Tech LLC, a human-in-the-loop workforce solutions provider, used CVAT to annotate about 5,000 images for a brand operating in the fashion segment.
- FiftyOne is an open-source dataset curation and model analysis tool for visualizing, exploring, and improving computer vision datasets and models that is tightly integrated with CVAT for annotation and label refinement.
CVAT online: cvat.ai
This is an online version of CVAT. It's free, efficient, and easy to use.
cvat.ai runs the latest version of the tool. You can create up to 10 tasks there and upload up to 500 MB of data to annotate. Your data is visible only to you and the people you assign to it.
For now, it lacks analytics features such as team management and annotation monitoring.
We plan to enhance cvat.ai with new powerful features. Stay tuned!
Prebuilt docker images are the easiest way to start using CVAT locally. They are available on Docker Hub:
The images have been downloaded more than 1M times so far.
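To get a local instance running on top of the prebuilt images, the usual route is the docker compose file shipped in the repository. A minimal sketch (the `cvat_server` container name and default port may differ between versions; check the installation guide):

```
# clone the repository, which contains docker-compose.yml
git clone https://github.com/cvat-ai/cvat
cd cvat

# pull the prebuilt images and start the services in the background
docker compose up -d

# create an admin account (container name is version-dependent)
docker exec -it cvat_server bash -ic 'python3 ~/manage.py createsuperuser'
```

CVAT should then be reachable in a browser at http://localhost:8080.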
Here are some screencasts showing how to use CVAT.
- Introduction
- Annotation mode
- Interpolation of bounding boxes
- Interpolation of polygons
- Tag annotation video
- Attribute mode
- Segmentation mode
- Tutorial for polygons
- Semi-automatic segmentation
- Install with `pip install cvat-sdk`
- PyPI package homepage
- Documentation
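A minimal sketch of connecting with the SDK's high-level API (the host and credentials are placeholders, a running CVAT server is assumed, and the method names should be verified against the SDK documentation for the version you install):

```python
from cvat_sdk import make_client

# connect to a self-hosted CVAT instance (placeholder credentials)
with make_client(host="http://localhost:8080",
                 credentials=("user", "password")) as client:
    # list the tasks visible to this account
    for task in client.tasks.list():
        print(task.id, task.name)
```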
- Install with `pip install cvat-cli`
- PyPI package homepage
- Documentation
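As a sketch of typical CLI usage (the subcommands and flags below are from the cvat-cli documentation as best recalled; host, credentials, and the label spec are placeholders, so run `cvat-cli --help` to confirm the exact interface):

```
# create a task from local images on a self-hosted server
cvat-cli --server-host localhost --server-port 8080 --auth user:password \
    create "example task" --labels '[{"name": "car"}]' local img1.jpg img2.jpg

# list existing tasks
cvat-cli --server-host localhost --server-port 8080 --auth user:password ls
```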
CVAT supports multiple annotation formats. You can select the format after clicking the "Upload annotation" and "Dump annotation" buttons. Datumaro dataset framework allows additional dataset transformations via its command line tool and Python library.
For more information about the supported formats, look at the documentation.
Annotation format | Import | Export |
---|---|---|
CVAT for images | ✔️ | ✔️ |
CVAT for video | ✔️ | ✔️ |
Datumaro | ✔️ | |
PASCAL VOC | ✔️ | ✔️ |
Segmentation masks from PASCAL VOC | ✔️ | ✔️ |
YOLO | ✔️ | ✔️ |
MS COCO Object Detection | ✔️ | ✔️ |
MS COCO Keypoints Detection | ✔️ | ✔️ |
TFRecord | ✔️ | ✔️ |
MOT | ✔️ | ✔️ |
LabelMe 3.0 | ✔️ | ✔️ |
ImageNet | ✔️ | ✔️ |
CamVid | ✔️ | ✔️ |
WIDER Face | ✔️ | ✔️ |
VGGFace2 | ✔️ | ✔️ |
Market-1501 | ✔️ | ✔️ |
ICDAR13/15 | ✔️ | ✔️ |
Open Images V6 | ✔️ | ✔️ |
Cityscapes | ✔️ | ✔️ |
KITTI | ✔️ | ✔️ |
LFW | ✔️ | ✔️ |
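To illustrate what the native "CVAT for images" XML looks like when annotations are dumped, here is a simplified, hand-written fragment and a small parser for it. The element and attribute names follow the XML annotation format documentation, but the snippet is illustrative, not a complete dump:

```python
import xml.etree.ElementTree as ET

# Hand-written fragment in the "CVAT for images" layout (simplified).
CVAT_XML = """<annotations>
  <version>1.1</version>
  <image id="0" name="frame_000001.png" width="1920" height="1080">
    <box label="car" xtl="100.0" ytl="200.0" xbr="300.0" ybr="400.0" occluded="0"/>
  </image>
</annotations>"""

def boxes_per_image(xml_text):
    """Return {image name: [(label, xtl, ytl, xbr, ybr), ...]} from CVAT XML."""
    root = ET.fromstring(xml_text)
    result = {}
    for image in root.iter("image"):
        boxes = [
            (box.get("label"),
             float(box.get("xtl")), float(box.get("ytl")),
             float(box.get("xbr")), float(box.get("ybr")))
            for box in image.iter("box")
        ]
        result[image.get("name")] = boxes
    return result

print(boxes_per_image(CVAT_XML))
# → {'frame_000001.png': [('car', 100.0, 200.0, 300.0, 400.0)]}
```

Formats like COCO or YOLO can then be produced from the same annotations via the export dialog or the Datumaro command line tool.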
CVAT supports automatic labeling, which can speed up the annotation process by up to 10x. Here is a list of the algorithms we support and the platforms they can be run on:
Name | Type | Framework | CPU | GPU |
---|---|---|---|---|
Deep Extreme Cut | interactor | OpenVINO | ✔️ | |
Faster RCNN | detector | OpenVINO | ✔️ | |
Mask RCNN | detector | OpenVINO | ✔️ | |
YOLO v3 | detector | OpenVINO | ✔️ | |
Object reidentification | reid | OpenVINO | ✔️ | |
Semantic segmentation for ADAS | detector | OpenVINO | ✔️ | |
Text detection v4 | detector | OpenVINO | ✔️ | |
YOLO v5 | detector | PyTorch | ✔️ | |
SiamMask | tracker | PyTorch | ✔️ | ✔️ |
f-BRS | interactor | PyTorch | ✔️ | |
HRNet | interactor | PyTorch | ✔️ | |
Inside-Outside Guidance | interactor | PyTorch | ✔️ | |
Faster RCNN | detector | TensorFlow | ✔️ | ✔️ |
Mask RCNN | detector | TensorFlow | ✔️ | ✔️ |
RetinaNet | detector | PyTorch | ✔️ | ✔️ |
Face Detection | detector | OpenVINO | ✔️ | |
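These models run as serverless functions (deployed with nuclio). As a rough sketch of the handler contract, the function receives a base64-encoded image and returns a JSON list of detections; the field names below follow CVAT's serverless tutorial, but the model call here is a stub, not a real detector:

```python
import base64
import json
from types import SimpleNamespace

def fake_detector(image_bytes):
    """Stand-in for a real model; returns (label, box points, confidence)."""
    return [("car", [10.0, 20.0, 110.0, 220.0], 0.92)]

def handler(context, event):
    """Nuclio-style entry point: decode the image, run the model, and
    return detections in the shape CVAT's auto-annotation UI expects."""
    data = json.loads(event.body)
    image_bytes = base64.b64decode(data["image"])
    detections = [
        {"label": label, "points": points,
         "confidence": str(conf), "type": "rectangle"}
        for label, points, conf in fake_detector(image_bytes)
    ]
    return context.Response(body=json.dumps(detections),
                            content_type="application/json",
                            status_code=200)

# local smoke test with stub context/event objects
class _Context:
    class Response:
        def __init__(self, body, content_type, status_code):
            self.body = body
            self.content_type = content_type
            self.status_code = status_code

event = SimpleNamespace(body=json.dumps(
    {"image": base64.b64encode(b"fake-image-bytes").decode()}))
response = handler(_Context(), event)
print(response.body)
```

A real function would replace `fake_detector` with actual inference and declare its labels in the function's configuration so CVAT can map them to task labels.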
The code is released under the MIT License.
This software uses LGPL licensed libraries from the FFmpeg project. The exact steps on how FFmpeg was configured and compiled can be found in the Dockerfile.
FFmpeg is an open source framework licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. CVAT.ai Corporation is not responsible for obtaining any such licenses, nor liable for any licensing fees due in connection with your use of FFmpeg.
Gitter chat: you can post CVAT usage questions there. They are typically answered quickly by the core team or the community, and you can also browse other common questions.
Discord is another place to ask questions or discuss anything related to CVAT.
GitHub issues: post feature requests and bug reports there. For a bug, please include the steps to reproduce it.
The #cvat tag on StackOverflow is one more way to ask questions and get our support.
contact@cvat.ai: reach out to us with feedback, comments, or inquiries.