This repository has been archived by the owner on Jun 19, 2023. It is now read-only.

✏️ Web-based image segmentation tool for object detection and localization

opencollective/coco-annotator

COCO Annotator is a web-based image annotation tool designed for versatile and efficient labeling of images to create training data for image localization and object detection. It provides many distinct features, including the ability to label an image segment (or part of a segment), track object instances, label objects with disconnected visible parts, and efficiently store and export annotations in the well-known COCO format. The annotation process is delivered through an intuitive and customizable interface with many tools for creating accurate datasets.

Note: This video is from v0.1.0 and many new features have been added.

Features

Several annotation tools are currently available, most of them as desktop installations. Once installed, users can manually define regions in an image and create a textual description. Generally, objects can be marked by a bounding box, either directly, through a masking tool, or by marking points to define the containing area. COCO Annotator allows users to annotate images using free-form curves or polygons and provides many additional features where other annotation tools fall short.

  • Directly export to COCO format
  • Segmentation of objects
  • Useful API endpoints to analyze data
  • Import datasets already annotated in COCO format
  • Annotate disconnected objects as a single instance
  • Label image segments with any number of labels simultaneously
  • Allow custom metadata for each instance or object
  • Magic wand/select tool
  • Generate datasets using Google Images
  • User authentication system

For examples and more information check out the wiki.
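An exported dataset is a plain JSON file in the standard COCO layout, so it can be inspected with nothing but Python's standard library. The snippet below is a minimal sketch with made-up values; the field names follow the public COCO specification, not any COCO Annotator-specific schema:

```python
import json

# Minimal, illustrative example of the standard COCO annotation layout
# (the image, category, and coordinate values here are made up).
coco = {
    "images": [
        {"id": 1, "file_name": "cat.jpg", "width": 640, "height": 480}
    ],
    "categories": [
        {"id": 1, "name": "cat", "supercategory": "animal"}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon segmentation: a list of [x1, y1, x2, y2, ...] rings,
            # so one instance can cover disconnected visible parts.
            "segmentation": [[100.0, 100.0, 200.0, 100.0, 200.0, 200.0]],
            "bbox": [100.0, 100.0, 100.0, 100.0],  # [x, y, width, height]
            "area": 5000.0,
            "iscrowd": 0,
        }
    ],
}

# Round-trip through JSON, as if reading an exported dataset file,
# then join annotations back to their image and category records.
dataset = json.loads(json.dumps(coco))
for ann in dataset["annotations"]:
    image = next(i for i in dataset["images"] if i["id"] == ann["image_id"])
    category = next(c for c in dataset["categories"] if c["id"] == ann["category_id"])
    print(f"{image['file_name']}: {category['name']} at bbox {ann['bbox']}")
```

Because the format is just JSON with numeric cross-references (`image_id`, `category_id`), the same file round-trips cleanly between COCO Annotator's import and export features.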

Demo

Login Information
Username: admin
Password: password

https://annotator.justinbrooks.ca/

Backers

Backed by The Robotics Institute @ Guelph (GitHub)

Built With

Thanks to all these wonderful libraries/frameworks:

  • Flask - Python web microframework
  • Vue - Frontend JavaScript framework
  • Axios - Promise based HTTP client
  • PaperJS - Canvas editor library
  • Bootstrap - Frontend component library

License

MIT
