
What's the training clicking strategy? #18

Closed
sdfghjkyuio opened this issue Jul 8, 2020 · 14 comments

Comments

@sdfghjkyuio

In the paper *Interactive Image Segmentation via Backpropagating Refinement Scheme*, user annotations are imitated through a clustering strategy when training on the SBD dataset. I'm wondering whether you applied the same method to generate clicks during training?

@ksofiyuk
Contributor

We simplified the point sampling strategy in our experiments. We observed that if points are simply sampled randomly, there is no difference in performance compared to the more complicated sampling with point clustering. You can check our training sampling strategy in this class.
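As an illustration of the random sampling idea, here is a minimal sketch (the `sample_clicks` helper below is hypothetical, not the repository's actual `points_sampler.py` code): positive clicks are drawn uniformly from foreground pixels of the ground-truth mask and negative clicks from background pixels.

```python
import numpy as np

def sample_clicks(mask, num_pos=3, num_neg=3, rng=None):
    """Sample positive clicks from foreground pixels and negative
    clicks from background pixels of a binary ground-truth mask."""
    rng = np.random.default_rng() if rng is None else rng
    fg = np.argwhere(mask > 0)   # (row, col) coordinates of foreground pixels
    bg = np.argwhere(mask == 0)  # (row, col) coordinates of background pixels
    pos_idx = rng.choice(len(fg), size=min(num_pos, len(fg)), replace=False)
    neg_idx = rng.choice(len(bg), size=min(num_neg, len(bg)), replace=False)
    return fg[pos_idx], bg[neg_idx]

# Toy example: a square object in an 8x8 mask.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1
pos, neg = sample_clicks(mask, num_pos=2, num_neg=2)
```

Note that with this scheme the quality of the sampled clicks is only as good as the mask itself: any mislabeled pixel can become a click.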

@sdfghjkyuio
Author

I guess it's in points_sampler.py 🤔

@sdfghjkyuio
Author

Omg we just commented at the same time! Thank you🤩

@ksofiyuk
Contributor

Yeah, it's a funny coincidence 😃. Thank you for your interest in our work.

@sdfghjkyuio
Author

I've run into another problem 😫: after training for one epoch on my own dataset, I checked the instance segmentation images in the experiment folder and found that some positive points (in green) appeared in the background area, which leads to prediction errors. How can I correct this? Thank you so much!

@ptrvilya
Contributor

Could you please share a sample from the visualization folder with the erroneous point placement?

@sdfghjkyuio
Author

[attached visualization: sampled points overlaid on an image]
Like here, the blue points (background points) appear on the person's face, which is foreground. If I want the network to learn to segment just one specific class from the image, I'm not sure it can learn the characteristics of that class well given the wrong clicks during training.

@ptrvilya
Contributor

ptrvilya commented Jul 14, 2020

It seems to me that these points lie near the border of the ground-truth mask, on the background side of it. I guess these points appear erroneous due to errors in the mask.

@ksofiyuk
Contributor

ksofiyuk commented Jul 14, 2020

I combined your mask with the image, and Ilia seems to be right. There are no mistakes.
[attached image: the mask overlaid on the photo]
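Such an overlay check is easy to reproduce with a short snippet (a generic sketch; `overlay_mask` is a hypothetical helper, not part of the repository):

```python
import numpy as np

def overlay_mask(image, mask, color=(0, 255, 0), alpha=0.5):
    """Blend a binary mask onto an RGB image so that mask/point
    misalignments are easy to spot by eye."""
    out = image.astype(np.float32).copy()
    tint = np.array(color, dtype=np.float32)
    # Blend only the pixels covered by the mask.
    out[mask > 0] = (1 - alpha) * out[mask > 0] + alpha * tint
    return out.astype(np.uint8)

# Toy example: tint one masked pixel of a gray image.
img = np.full((2, 2, 3), 100, dtype=np.uint8)
m = np.array([[1, 0], [0, 0]], dtype=np.uint8)
blended = overlay_mask(img, m)
```

Inspecting a few such overlays is usually the quickest way to tell whether "wrong" clicks come from the sampler or from the mask itself.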

@sdfghjkyuio
Author

Yes, the mask matches the image based on the green and blue points. But I'm confused about whether the blue points are in the right locations; it seems that not all the blue points are sampled from background pixels.

@ksofiyuk
Contributor

There are no mistakes in the point sampling.
If you want to train a good interactive segmentation model, you need to train it on a dataset with nearly perfect segmentation masks. It seems that your dataset has some very coarse masks.

@sdfghjkyuio
Author

Actually, this image is an example from the SBD dataset... Thank you for your advice; I will check the masks that produce wrongly sampled points.

@ksofiyuk
Contributor

Yeah, we know; SBD is not a good dataset in terms of mask quality. We use it only for academic research, to compare with previous works.

@sdfghjkyuio
Author

Thank you so much for answering my questions!
