Replace mask format support with Datumaro (#1163)
* Add box to mask transform

* Fix 'source' labelmap mode in voc converter

* Import groups

* Replace mask format support

* Update mask format documentation

* codacy

* Fix tests

* Fix dataset

* Fix segments grouping

* Merge instances in mask export
zhiltsov-max committed Feb 21, 2020
1 parent 80d3f97 commit 8caa169
Showing 14 changed files with 301 additions and 160 deletions.
23 changes: 11 additions & 12 deletions README.md
@@ -38,18 +38,17 @@ Format selection is possible after clicking on the Upload annotation / Dump anno
The [Datumaro](datumaro/README.md) dataset framework allows additional dataset transformations
via its command line tool.

| Annotation format | Dumper | Loader |
| ---------------------------------------------------------------------------------- | ------ | ------ |
| [CVAT XML v1.1 for images](cvat/apps/documentation/xml_format.md#annotation) | X | X |
| [CVAT XML v1.1 for a video](cvat/apps/documentation/xml_format.md#interpolation) | X | X |
| [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | X | X |
| [YOLO](https://pjreddie.com/darknet/yolo/) | X | X |
| [MS COCO Object Detection](http://cocodataset.org/#format-data) | X | X |
| PNG mask | X | |
| PNG instance mask | X | |
| [TFrecord](https://www.tensorflow.org/tutorials/load_data/tf_records) | X | X |
| [MOT](https://motchallenge.net/) | X | X |
| [LabelMe](http://labelme.csail.mit.edu/Release3.0) | X | X |
| Annotation format | Dumper | Loader |
| ------------------------------------------------------------------------------------------ | ------ | ------ |
| [CVAT XML v1.1 for images](cvat/apps/documentation/xml_format.md#annotation) | X | X |
| [CVAT XML v1.1 for a video](cvat/apps/documentation/xml_format.md#interpolation) | X | X |
| [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | X | X |
| [YOLO](https://pjreddie.com/darknet/yolo/) | X | X |
| [MS COCO Object Detection](http://cocodataset.org/#format-data) | X | X |
| PNG class mask + instance mask as in [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | X | X |
| [TFrecord](https://www.tensorflow.org/tutorials/load_data/tf_records) | X | X |
| [MOT](https://motchallenge.net/) | X | X |
| [LabelMe](http://labelme.csail.mit.edu/Release3.0) | X | X |

## Links
- [Intel AI blog: New Computer Vision Tool Accelerates Annotation of Digital Images and Video](https://www.intel.ai/introducing-cvat)
44 changes: 37 additions & 7 deletions cvat/apps/annotation/README.md
@@ -506,18 +506,48 @@ python create_pascal_tf_record.py --data_dir <path to VOCdevkit> --set train --y
- downloaded file: a zip archive with the following structure:
```bash
taskname.zip
├── frame_000001.png
├── frame_000002.png
├── frame_000003.png
├── ...
└── colormap.txt
├── labelmap.txt # optional, required for non-VOC labels
├── ImageSets/
│   └── Segmentation/
│       └── default.txt # list of image names without extension
├── SegmentationClass/ # merged class masks
│   ├── image1.png
│   └── image2.png
└── SegmentationObject/ # merged instance masks
    ├── image1.png
    └── image2.png
```
A mask is a PNG image with three (RGB) channels, where each pixel has its own color that corresponds to a label.
Color generation follows the Pascal VOC color generation
[algorithm](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:voclabelcolormap);
(0, 0, 0) is used for the background. A minimal generation sketch is shown after the labelmap example below.
`colormap.txt` file contains the values of the used colors in RGB format.
The `labelmap.txt` file contains the values of the used colors in RGB format. The file structure:
```bash
# label:color_rgb:parts:actions
background:0,128,0::
aeroplane:10,10,128::
bicycle:10,128,0::
bird:0,108,128::
boat:108,0,100::
bottle:18,0,8::
bus:12,28,0::
```
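For reference, the colors can be generated with the following minimal NumPy sketch of the bit-shuffling algorithm (the same routine that appeared in the removed `mask.py` code further below); it is shown only for illustration and is not part of the exported archive:

```python
import numpy as np

def pascal_voc_colormap(size=256):
    # Pascal VOC label colormap; index 0 maps to (0, 0, 0), the background.
    colormap = np.zeros((size, 3), dtype=int)
    ind = np.arange(size, dtype=int)
    for shift in reversed(range(8)):
        for channel in range(3):
            colormap[:, channel] |= ((ind >> channel) & 1) << shift
        ind >>= 3
    return colormap

print(pascal_voc_colormap()[:4])
# [[  0   0   0]
#  [128   0   0]
#  [  0 128   0]
#  [128 128   0]]
```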
- supported shapes: Rectangles, Polygons

#### Mask loader description
Not supported
- uploaded file: a zip archive of the following structure:
```bash
name.zip
├── labelmap.txt # optional, required for non-VOC labels
├── ImageSets/
│   └── Segmentation/
│       └── <any_subset_name>.txt
├── SegmentationClass/
│   ├── image1.png
│   └── image2.png
└── SegmentationObject/
    ├── image1.png
    └── image2.png
```
- supported shapes: Polygons
- additional comments: the CVAT task should be created with the full set of labels that may appear in the annotation files
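For illustration only, a small sketch of how a single line of this `labelmap.txt` format could be parsed; the helper name is hypothetical and comment lines (like the first line of the file) are not handled:

```python
def parse_labelmap_line(line):
    # Format: label:color_rgb:parts:actions, e.g. "bicycle:10,128,0::"
    label, color, parts, actions = line.strip().split(":")
    rgb = tuple(int(c) for c in color.split(",")) if color else None
    return (label, rgb,
            parts.split(",") if parts else [],
            actions.split(",") if actions else [])

print(parse_labelmap_line("bicycle:10,128,0::"))
# ('bicycle', (10, 128, 0), [], [])
```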
166 changes: 48 additions & 118 deletions cvat/apps/annotation/mask.py
@@ -6,130 +6,60 @@
"name": "MASK",
"dumpers": [
{
"display_name": "{name} (by class) {format} {version}",
"display_name": "{name} {format} {version}",
"format": "ZIP",
"version": "1.0",
"handler": "dump_by_class"
"version": "1.1",
"handler": "dump",
},
],
"loaders": [
{
"display_name": "{name} (by instance) {format} {version}",
"display_name": "{name} {format} {version}",
"format": "ZIP",
"version": "1.0",
"handler": "dump_by_instance"
"version": "1.1",
"handler": "load",
},
],
"loaders": [
],
}

MASK_BY_CLASS = 0
MASK_BY_INSTANCE = 1

def convert_box_to_polygon(shape):
xtl = shape.points[0]
ytl = shape.points[1]
xbr = shape.points[2]
ybr = shape.points[3]

return [xtl, ytl, xbr, ytl, xbr, ybr, xtl, ybr]

def create_mask_colorizer(annotations, colorize_type):
import numpy as np
from collections import OrderedDict

class MaskColorizer:

def __init__(self, annotations, colorize_type):

if colorize_type == MASK_BY_CLASS:
self.colors = self.gen_class_mask_colors(annotations)
elif colorize_type == MASK_BY_INSTANCE:
self.colors = self.gen_instance_mask_colors()

def generate_pascal_colormap(self, size=256):
# RGB format, (0, 0, 0) used for background
colormap = np.zeros((size, 3), dtype=int)
ind = np.arange(size, dtype=int)

for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3

return colormap

def gen_class_mask_colors(self, annotations):
colormap = self.generate_pascal_colormap()
labels = [label[1]["name"] for label in annotations.meta["task"]["labels"] if label[1]["name"] != 'background']
labels.insert(0, 'background')
label_colors = OrderedDict((label, colormap[idx]) for idx, label in enumerate(labels))

return label_colors

def gen_instance_mask_colors(self):
colormap = self.generate_pascal_colormap()
# The first color is black
instance_colors = OrderedDict((idx, colormap[idx]) for idx in range(len(colormap)))

return instance_colors

return MaskColorizer(annotations, colorize_type)

def dump(file_object, annotations, colorize_type):

from zipfile import ZipFile, ZIP_STORED
import numpy as np
import os
from pycocotools import mask as maskUtils
import matplotlib.image
import io

colorizer = create_mask_colorizer(annotations, colorize_type=colorize_type)
if colorize_type == MASK_BY_CLASS:
save_dir = "SegmentationClass"
elif colorize_type == MASK_BY_INSTANCE:
save_dir = "SegmentationObject"

with ZipFile(file_object, "w", ZIP_STORED) as output_zip:
for frame_annotation in annotations.group_by_frame():
image_name = frame_annotation.name
annotation_name = "{}.png".format(os.path.splitext(os.path.basename(image_name))[0])
width = frame_annotation.width
height = frame_annotation.height

shapes = frame_annotation.labeled_shapes
# convert to mask only rectangles and polygons
shapes = [shape for shape in shapes if shape.type == 'rectangle' or shape.type == 'polygon']
if not shapes:
continue
shapes = sorted(shapes, key=lambda x: int(x.z_order))
img_mask = np.zeros((height, width, 3))
buf_mask = io.BytesIO()
for shape_index, shape in enumerate(shapes):
points = shape.points if shape.type != 'rectangle' else convert_box_to_polygon(shape)
rles = maskUtils.frPyObjects([points], height, width)
rle = maskUtils.merge(rles)
mask = maskUtils.decode(rle)
idx = (mask > 0)
# get corresponding color
if colorize_type == MASK_BY_CLASS:
color = colorizer.colors[shape.label] / 255
elif colorize_type == MASK_BY_INSTANCE:
color = colorizer.colors[shape_index+1] / 255

img_mask[idx] = color

# write mask
matplotlib.image.imsave(buf_mask, img_mask, format='png')
output_zip.writestr(os.path.join(save_dir, annotation_name), buf_mask.getvalue())
# Store color map for each class
labels = '\n'.join('{}:{}'.format(label, ','.join(str(i) for i in color)) for label, color in colorizer.colors.items())
output_zip.writestr('colormap.txt', labels)

def dump_by_class(file_object, annotations):

return dump(file_object, annotations, MASK_BY_CLASS)

def dump_by_instance(file_object, annotations):

return dump(file_object, annotations, MASK_BY_INSTANCE)
def dump(file_object, annotations):
from cvat.apps.dataset_manager.bindings import CvatAnnotationsExtractor
from cvat.apps.dataset_manager.util import make_zip_archive
from datumaro.components.project import Environment, Dataset
from tempfile import TemporaryDirectory

env = Environment()
polygons_to_masks = env.transforms.get('polygons_to_masks')
boxes_to_masks = env.transforms.get('boxes_to_masks')
merge_instance_segments = env.transforms.get('merge_instance_segments')
id_from_image = env.transforms.get('id_from_image_name')

extractor = CvatAnnotationsExtractor('', annotations)
extractor = extractor.transform(polygons_to_masks)
extractor = extractor.transform(boxes_to_masks)
extractor = extractor.transform(merge_instance_segments)
extractor = extractor.transform(id_from_image)
extractor = Dataset.from_extractors(extractor) # apply lazy transforms
converter = env.make_converter('voc_segmentation',
apply_colormap=True, label_map='source')
with TemporaryDirectory() as temp_dir:
converter(extractor, save_dir=temp_dir)
make_zip_archive(temp_dir, file_object)

def load(file_object, annotations):
from pyunpack import Archive
from tempfile import TemporaryDirectory
from datumaro.plugins.voc_format.importer import VocImporter
from datumaro.components.project import Environment
from cvat.apps.dataset_manager.bindings import import_dm_annotations

archive_file = file_object if isinstance(file_object, str) else getattr(file_object, "name")
with TemporaryDirectory() as tmp_dir:
Archive(archive_file).extractall(tmp_dir)

dm_project = VocImporter()(tmp_dir)
dm_dataset = dm_project.make_dataset()
masks_to_polygons = Environment().transforms.get('masks_to_polygons')
dm_dataset = dm_dataset.transform(masks_to_polygons)
import_dm_annotations(dm_dataset, annotations)
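As a rough, pixel-level illustration of what the export pipeline above arranges before the VOC converter runs, here is a standalone sketch with hypothetical helpers; it is not Datumaro's actual implementation of the `merge_instance_segments` transform or of colormap application:

```python
import numpy as np

def merge_group_masks(masks):
    # Union of the binary masks that belong to one instance group.
    merged = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        merged |= m.astype(bool)
    return merged

def colorize(index_mask, colormap):
    # Map an HxW array of class/instance indices to an RGB image;
    # index 0 keeps the background color (0, 0, 0).
    return colormap[index_mask]
```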
17 changes: 17 additions & 0 deletions cvat/apps/dataset_manager/bindings.py
@@ -211,6 +211,22 @@ def import_dm_annotations(dm_dataset, cvat_task_anno):
for item in dm_dataset:
frame_number = match_frame(item, cvat_task_anno)

# do not store one-item groups
group_map = { 0: 0 }
group_size = { 0: 0 }
for ann in item.annotations:
if ann.type in shapes:
group = group_map.get(ann.group)
if group is None:
group = len(group_map)
group_map[ann.group] = group
group_size[ann.group] = 1
else:
group_size[ann.group] += 1
group_map = {g: s for g, s in group_size.items()
if 1 < s and group_map[g]}
group_map = {g: i for i, g in enumerate([0] + sorted(group_map))}

for ann in item.annotations:
if ann.type in shapes:
cvat_task_anno.add_shape(cvat_task_anno.LabeledShape(
@@ -219,5 +235,6 @@ def import_dm_annotations(dm_dataset, cvat_task_anno):
label=label_cat.items[ann.label].name,
points=ann.points,
occluded=False,
group=group_map.get(ann.group, 0),
attributes=[],
))
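The grouping code above is easier to see on plain integers. A standalone sketch of the same rule (the helper is hypothetical, not part of the codebase): group 0 and one-item groups collapse to 0, and the surviving groups are renumbered densely from 1:

```python
from collections import Counter

def remap_groups(groups):
    # Count members per non-zero group, keep only groups with more than one member,
    # and renumber the kept groups starting from 1 (0 stays "ungrouped").
    sizes = Counter(g for g in groups if g != 0)
    kept = sorted(g for g, size in sizes.items() if size > 1)
    group_map = {g: i for i, g in enumerate([0] + kept)}
    return [group_map.get(g, 0) for g in groups]

print(remap_groups([5, 5, 7, 0, 0]))  # [1, 1, 0, 0, 0] -- group 7 had a single member
```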
8 changes: 5 additions & 3 deletions cvat/apps/engine/tests/test_rest_api.py
@@ -2658,9 +2658,9 @@ def _get_initial_annotation(annotation_format):
elif annotation_format == "COCO JSON 1.0":
annotations["shapes"] = polygon_shapes_wo_attrs

elif annotation_format == "MASK ZIP 1.0":
annotations["shapes"] = rectangle_shapes_with_attrs + rectangle_shapes_wo_attrs + polygon_shapes_wo_attrs
annotations["tracks"] = rectangle_tracks_with_attrs + rectangle_tracks_wo_attrs
elif annotation_format == "MASK ZIP 1.1":
annotations["shapes"] = rectangle_shapes_wo_attrs + polygon_shapes_wo_attrs
annotations["tracks"] = rectangle_tracks_wo_attrs

elif annotation_format == "MOT CSV 1.0":
annotations["tracks"] = rectangle_tracks_wo_attrs
@@ -2730,6 +2730,8 @@ def _get_initial_annotation(annotation_format):
}

for loader in annotation_format["loaders"]:
if loader["display_name"] == "MASK ZIP 1.1":
continue # can't really predict the result and check
response = self._upload_api_v1_tasks_id_annotations(task["id"], annotator, uploaded_data, "format={}".format(loader["display_name"]))
self.assertEqual(response.status_code, HTTP_202_ACCEPTED)

14 changes: 7 additions & 7 deletions datumaro/datumaro/components/project.py
@@ -321,17 +321,17 @@ def from_extractors(cls, *sources):
subsets = defaultdict(lambda: Subset(dataset))
for source in sources:
for item in source:
path = None # NOTE: merge everything into our own dataset

existing_item = subsets[item.subset].items.get(item.id)
if existing_item is not None:
item = self._merge_items(existing_item, item, path=path)
else:
item = item.wrap(path=path, annotations=item.annotations)
path = existing_item.path
if item.path != path:
path = None
item = cls._merge_items(existing_item, item, path=path)

subsets[item.subset].items[item.id] = item

self._subsets = dict(subsets)
dataset._subsets = dict(subsets)
return dataset

def __init__(self, categories=None):
super().__init__()
@@ -419,7 +419,7 @@ def _merge_items(cls, existing_item, current_item, path=None):
image._path = current_item.image.path

if all([existing_item.image._size, current_item.image._size]):
assert existing_item.image._size == current_item.image._size, "Image info differs for item '%s'" % item.id
assert existing_item.image._size == current_item.image._size, "Image info differs for item '%s'" % existing_item.id
elif existing_item.image._size:
image._size = existing_item.image._size
else:
1 change: 1 addition & 0 deletions datumaro/datumaro/plugins/coco_format/converter.py
@@ -361,6 +361,7 @@ def save_annotations(self, item):

@classmethod
def find_solitary_points(cls, annotations):
annotations = sorted(annotations, key=lambda a: a.group)
solitary_points = []

for g_id, group in groupby(annotations, lambda a: a.group):
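The added `sorted` call matters because `itertools.groupby` only groups adjacent elements with equal keys, so unsorted annotations could split one group into several runs. A minimal sketch of the difference:

```python
from itertools import groupby

items = [("a", 1), ("b", 2), ("c", 1)]  # (value, group id)
key = lambda item: item[1]

print([(g, [v for v, _ in members]) for g, members in groupby(items, key)])
# [(1, ['a']), (2, ['b']), (1, ['c'])] -- group 1 is split into two runs

print([(g, [v for v, _ in members]) for g, members in groupby(sorted(items, key=key), key)])
# [(1, ['a', 'c']), (2, ['b'])]
```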
21 changes: 21 additions & 0 deletions datumaro/datumaro/plugins/transforms.py
@@ -181,6 +181,27 @@ def convert_polygon(polygon, img_h, img_w):
return RleMask(rle=rle, label=polygon.label, z_order=polygon.z_order,
id=polygon.id, attributes=polygon.attributes, group=polygon.group)

class BoxesToMasks(Transform, CliPlugin):
def transform_item(self, item):
annotations = []
for ann in item.annotations:
if ann.type == AnnotationType.bbox:
if not item.has_image:
raise Exception("Image info is required for this transform")
h, w = item.image.size
annotations.append(self.convert_bbox(ann, h, w))
else:
annotations.append(ann)

return self.wrap_item(item, annotations=annotations)

@staticmethod
def convert_bbox(bbox, img_h, img_w):
rle = mask_utils.frPyObjects([bbox.as_polygon()], img_h, img_w)[0]

return RleMask(rle=rle, label=bbox.label, z_order=bbox.z_order,
id=bbox.id, attributes=bbox.attributes, group=bbox.group)

class MasksToPolygons(Transform, CliPlugin):
def transform_item(self, item):
annotations = []
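For context on what `BoxesToMasks.convert_bbox` produces, here is a self-contained sketch (with made-up numbers) of turning a box into a polygon and then into a COCO run-length-encoded mask with pycocotools; the corner order matches the removed `convert_box_to_polygon` helper above:

```python
from pycocotools import mask as mask_utils

# A box given as (x, y, w, h), written out as its four corners.
x, y, w, h = 2, 1, 4, 3
polygon = [x, y, x + w, y, x + w, y + h, x, y + h]

img_h, img_w = 6, 8
rle = mask_utils.frPyObjects([polygon], img_h, img_w)[0]  # encode the polygon as RLE
binary_mask = mask_utils.decode(rle)                      # img_h x img_w binary array

print(binary_mask.sum())  # number of pixels covered by the box mask
```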