Unsupervised Saliency Model #1
Comments
Hi @yucornetto, thank you for the kind words. First of all, we will update the repository upon acceptance of the paper. After training, you need to threshold the output at 0.5 to obtain a binary mask. We additionally filter out images for which the area of the salient object is smaller than 1% of the total image area. For more complex scenes, you might also apply connected components to obtain non-overlapping object masks; however, this was not necessary for PASCAL.
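The post-processing described above could be sketched roughly as follows. This is only an illustration of the recipe (threshold at 0.5, drop masks covering less than 1% of the image, optionally split into connected components), not code from the repository; the function name and defaults are made up here:

```python
from collections import deque

def postprocess_saliency(prob_map, threshold=0.5, min_area_frac=0.01):
    """Illustrative post-processing of a per-pixel saliency probability map.

    `prob_map` is an H x W nested list of saliency probabilities in [0, 1].
    Returns a list of boolean per-object masks, or None if the salient
    region is too small (the image would be filtered out).
    """
    h, w = len(prob_map), len(prob_map[0])

    # 1. Threshold the output at 0.5 to obtain a binary mask.
    binary = [[p > threshold for p in row] for row in prob_map]

    # 2. Filter out the image if the salient area is below 1% of the total.
    area = sum(sum(row) for row in binary)
    if area < min_area_frac * h * w:
        return None

    # 3. Optionally split the mask into non-overlapping object masks via
    #    connected components (4-connectivity BFS; not needed for PASCAL).
    seen = [[False] * w for _ in range(h)]
    masks = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                comp = [[False] * w for _ in range(h)]
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp[cy][cx] = True
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                masks.append(comp)
    return masks
```

In practice one would use an array library (e.g. `scipy.ndimage.label`) for the connected-components step; the pure-Python version above is just to keep the sketch self-contained.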
Hi @wvangansbeke @yucornetto, I'm also interested in applying the unsupervised saliency mask contrast to a different dataset.
Just realized that the code for DeepUSPS is not linked on the arXiv version but is present in the NeurIPS version. Link: https://tinyurl.com/wtlhgo3
Hi @wvangansbeke, sorry to bother you again. I just checked the paper and code of DeepUSPS and found that they seem to build the model on a CityScapes-pretrained DRN (in Sec. 4.1: "We use the DRN-network (Chen et al., 2018) which is pretrained on CityScapes (Cordts et al., 2016)."; also in their code, DeepUSPS.py, Line 264: single_model = DRNSeg(args.arch, 2, None, pretrained=True)). So it seems that DeepUSPS has used the labeled data of CityScapes, which may mean your method also benefits from labeled CityScapes data. My understanding is that in unsupervised saliency detection it may be acceptable to claim "unsupervised" as long as no saliency labels are used, but in the SSL area people tend to avoid using any sort of labels, regardless of whether they are related to the target dataset. I also noticed that in your paper the backbone is initialized with MoCov2 pre-trained weights instead of ImageNet supervised pre-trained weights, so I assume you are also trying to avoid using any labeled data. This situation confuses me; I am not sure if I have misunderstood anything. Thanks in advance for your time.
Hi @yucornetto, in our repo we refer to the code of DeepUSPS to obtain the saliency estimator for now.
I see, thanks again for the explanation and your great work! 👍
Hi, thanks so much for the great work, it is really interesting and inspiring. I wonder if you will provide the implementation and/or pretrained weights for the unsupervised saliency model, so that we can also generate saliency masks and try your method on other datasets besides PASCAL VOC?