Good Day,
I am using a DeepLabv3 PyTorch model for a semantic segmentation task, and I would like to integrate this model into the CVAT backend so that I can perform semi-automated labelling on new images. Let's say the trained DeepLabv3 model produces an RGB mask output (same height and width as the input). To use this prediction in CVAT, I need to convert it from an RGB pixel mask to a polygon mask. To find the polygons, we use OpenCV's findContours function. But since these polygons are approximated, we end up with gaps between them, as shown below.
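For context, here is roughly the kind of conversion I mean (a minimal sketch, assuming OpenCV 4.x; the PALETTE mapping and the function name are placeholders for illustration, not CVAT APIs):

```python
import cv2
import numpy as np

# Hypothetical class-color mapping; substitute the palette your model uses.
PALETTE = {(0, 0, 255): "car", (0, 255, 0): "vegetation"}

def rgb_mask_to_polygons(rgb_mask, epsilon_ratio=0.01):
    """Extract one list of approximated polygons per class from an RGB mask."""
    polygons = {}
    for color, label in PALETTE.items():
        # Isolate the pixels belonging to this class.
        binary = cv2.inRange(rgb_mask, np.array(color), np.array(color))
        contours, _ = cv2.findContours(
            binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        approx = []
        for contour in contours:
            # approxPolyDP simplifies the contour: a larger epsilon means
            # fewer vertices but a larger deviation from the true class
            # boundary, which is where the gaps between classes come from.
            epsilon = epsilon_ratio * cv2.arcLength(contour, True)
            approx.append(cv2.approxPolyDP(contour, epsilon, True))
        polygons[label] = approx
    return polygons
```

Lowering epsilon_ratio shrinks the gaps but inflates the vertex count of every polygon, so it only trades one problem for another.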
Sometimes these gaps could be so small that the human labeler might not notice them during validation. This is obviously a problem when training a new model with such labels. Semantic segmentation models expect all the pixels to have a class assigned to them. This is not the case here.
My question is: does CVAT have any function I could use to prevent these gaps from appearing when converting from a pixel-based mask to a polygon-based mask? If not, do you have any tips on how to avoid this issue?
Thanks a lot!
Steps to Reproduce (for bugs)
1. Run a forward pass on a trained semantic segmentation model.
2. Convert the model output (RGB mask) into polygons so that it is compatible with the CVAT annotation format.
3. Upload the annotations to CVAT and view the predicted labels.
Context
Trying to generate semi-automated labels for semantic segmentation without any "empty spaces", because semantic segmentation models require every pixel to have exactly one class assigned to it.
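One way to quantify these "empty spaces" (a minimal sketch, assuming a polygons-per-label dict like the one produced by the conversion above; not part of CVAT):

```python
import cv2
import numpy as np

def unassigned_pixel_fraction(polygons_by_label, height, width):
    """Rasterize all class polygons and report the fraction of pixels
    that end up with no class at all."""
    coverage = np.zeros((height, width), dtype=np.uint8)
    for polys in polygons_by_label.values():
        # fillPoly accepts the int32 point arrays returned by approxPolyDP.
        cv2.fillPoly(coverage, polys, 255)
    # Any pixel still 0 was lost to polygon approximation: a gap.
    return np.count_nonzero(coverage == 0) / (height * width)
```

A nonzero result on a mask that originally covered every pixel confirms the approximation, not the model, is what introduced the gaps.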
Your Environment
Operating System and version (e.g. Linux, Windows, MacOS): Windows
We are working on integrating Paint & Brush tools into CVAT, to support not only polygonal masks but also pixel-based masks. For now, there is no way to prevent such approximation-related issues.