
Jagged edges in the output masks #25

Closed
vanga opened this issue Jun 17, 2022 · 14 comments

@vanga

vanga commented Jun 17, 2022

Hi,

Config: mask_rcnn_R_101_FPN_3x_deform.yaml
checkpoint: output_3x_transfiner_r101_deform.pth

All other settings are defaults from the yaml config.

I am doing prediction like this (I used the demo.py script as well and the results are the same). I don't see any errors/warnings in the logs:

import cv2
from detectron2.engine import DefaultPredictor

im = cv2.imread(src_img_path)          # BGR image, as detectron2 expects
predictor = DefaultPredictor(cfg)
outputs = predictor(im)                # outputs["instances"] holds the predicted masks
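
For context, cfg is built roughly like this; the config/checkpoint paths are just my local copies of the files named above:

from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("mask_rcnn_R_101_FPN_3x_deform.yaml")   # local path to the config above
cfg.MODEL.WEIGHTS = "output_3x_transfiner_r101_deform.pth"  # local path to the checkpoint above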

I see that the edges of the masks are not smooth, even for the sample images in the repo. Is this expected? I am asking because the GIFs in the README appear to have smooth edges, so I was expecting the same.

000000131444-out

000000126137-out
000000008844-out

000000157365-out

@lkeab
Collaborator

lkeab commented Jun 17, 2022

Hi, is this line triggered during the inference process? You can try increasing the dilation kernel size to reduce the artifacts. This is the result for the running woman on my local machine.
image
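
If you want to experiment, the dilation is conceptually just a max-pool over the binary mask; a rough, standalone sketch (not the exact code in mask_head.py) where kernel_size is the knob to increase:

import torch
import torch.nn.functional as F

def dilate_mask(mask, kernel_size=5):
    # mask: (H, W) float tensor with values in {0, 1}
    # stride=1 with padding=kernel_size//2 keeps the spatial size for odd kernels
    pad = kernel_size // 2
    return F.max_pool2d(mask[None, None], kernel_size, stride=1, padding=pad)[0, 0]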

@vanga
Author

vanga commented Jun 18, 2022

Hi @lkeab, without the background removed these edges are not noticeable. I don't see the jagged edges with my inference either, at least they are not noticeable when visualized this way.
000000008844

I simply do this to remove the background, so I don't think it is an issue with the background removal process.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = Image.open(src_img_path)

np_img = np.copy(np.asarray(img))
neg_mask = ~mask                 # mask: boolean instance mask from the predictor output
np_img[neg_mask] = 255           # paint everything outside the mask white
plt.imshow(np_img)

I wasn't setting VIS_PERIOD, unless it's on by default. I have set it now and can confirm that the code does reach the dilating step, though I am not sure the output is any different. I also tried a 7x7 kernel; not much improvement.

000000008844-out

I see the logs below during inference; not sure if they are of any significance.

fpn input shapes: {'res2': ShapeSpec(channels=256, height=None, width=None, stride=4), 'res3': ShapeSpec(channels=512, height=None, width=None, stride=8), 'res4': ShapeSpec(channels=1024, height=None, width=None, stride=16), 'res5': ShapeSpec(channels=2048, height=None, width=None, stride=32)}
/home/ubuntu/vangap/transfiner/detectron2/structures/image_list.py:99: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  max_size = (max_size + (stride - 1)) // stride * stride
/home/ubuntu/miniconda3/envs/transfiner/lib/python3.7/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
dilating---------------------------------------
/home/ubuntu/vangap/transfiner/detectron2/modeling/roi_heads/mask_head.py:60: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats)

@lkeab
Collaborator

lkeab commented Jun 18, 2022

I didn't observe these warnings when running my code.

@vanga
Author

vanga commented Jun 18, 2022

Could there be an issue with the PyTorch/CUDA versions?
I am using the latest versions of everything, on an A100.

@lkeab
Collaborator

lkeab commented Jun 18, 2022

Not sure. I am using an NVIDIA TITAN RTX, PyTorch 1.7.1, and CUDA 11.0.

@vanga
Author

vanga commented Jun 19, 2022

If you don't mind, would it be possible to provide the mask object for one of these images (and details on which model and settings were used), or an option in the Hugging Face Space to download it?

I would like to compare it with what I am seeing and confirm whether it's an environment issue or something else.

@lkeab
Collaborator

lkeab commented Jun 20, 2022

Hi, I am currently traveling. One way to validate this is to report the AP results you get with our pretrained model (for example, R50-FPN-1x) on your machine. If they match exactly, that should rule out environment factors.

lkeab closed this as completed Jun 22, 2022
@anl13

anl13 commented Jun 24, 2022

(Quoting @vanga's comment from Jun 18 above: the jagged edges after background removal, the dilating step being reached, and the __floordiv__ / torch.meshgrid warnings.)

I encountered the same warnings! My environment: Ubuntu 20.04, Python 3.7.13, PyTorch 1.11.0, CUDA 11.3.

@vanga
Author

vanga commented Jun 27, 2022

@anl13 Do you also see the jagged edges?

@anl13

anl13 commented Jun 27, 2022

@vanga Yes, I observed exactly the same results. In fact, I silenced these warnings by replacing a // b with torch.div(a, b, rounding_mode='floor'), but it didn't help. See the result below. By the way, my test GPU is an NVIDIA Titan X.
demo
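
For reference, the replacement is mechanical; for example, for the image_list.py line from the log above (with made-up values so it runs standalone):

import torch

stride = 32
max_size = torch.tensor([800, 1216])
# before (warns): max_size = (max_size + (stride - 1)) // stride * stride
max_size = torch.div(max_size + (stride - 1), stride, rounding_mode='floor') * stride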

@vanga
Author

vanga commented Jun 28, 2022

@lkeab Can we keep this open until there is a resolution, considering it is not just me?

Regarding checking the AP results, could you please share some pointers on how I can do that?

@lkeab
Collaborator

lkeab commented Jun 29, 2022

Hi, for checking AP results, you can report the inference results on the COCO val set using our provided pretrained models, such as R50-FPN-1x or 3x. I will also look into this to see whether I can help fix it. It's not that noticeable when drawing masks directly on images.
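
Roughly, with detectron2's standard evaluation utilities it would look like the sketch below (assuming coco_2017_val is registered locally and cfg points at the pretrained weights):

from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.modeling import build_model

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)   # load the pretrained checkpoint
evaluator = COCOEvaluator("coco_2017_val", output_dir="./eval_out")
val_loader = build_detection_test_loader(cfg, "coco_2017_val")
print(inference_on_dataset(model, val_loader, evaluator))  # prints box/mask AP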

@lkeab
Collaborator

lkeab commented Jun 29, 2022

Hi, we have also updated the code. Please try the latest version to see whether it helps. These are the results I obtained.
test_img_819
test_img_785
test_img_454
test_img_464
test_img_924

@anl13

anl13 commented Jun 30, 2022

Here are my results with the updated code. There are improvements compared with the previous code. I tested three pretrained models for human segmentation.

model: output_3x_transfiner_r101_deform.pth:
output000000008844
output000000126137
output000000131444
output000000157365

model: output_3x_transfiner_r101.pth:
output2000000008844
output2000000126137
output2000000131444
output2000000157365

model: output_3x_transfiner_r50.pth:
output3000000008844
output3000000126137
output3000000131444
output3000000157365
