
Directly evaluate coco-stuff model on ADE-20K #6

Open
Jeff-LiangF opened this issue Jun 27, 2022 · 3 comments
@Jeff-LiangF

@dingjiansw101 Hi Jian, thanks for your great work! I am wondering whether you happened to test your trained coco-stuff model directly on the ADE-20K dataset. Concurrent works such as [1][2] all report this transfer number, and it would be very interesting to compare your work with these counterparts. Thanks!

[1] Xu, Mengde, et al. "A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model." arXiv preprint arXiv:2112.14757 (2021).
[2] Ghiasi, Golnaz, et al. "Open-vocabulary image segmentation." arXiv preprint arXiv:2112.12143 (2021).

@dingjiansw101
Owner

dingjiansw101 commented Jun 27, 2022

Yes, I have tried this before. I checked an old log from a previous experiment: the coco-stuff → ADE20K-150 generalization performance is 16.4 mIoU. However, I am not sure whether that was the newest model, and I still need to check the details before comparing with other methods. Of course, you can also test it yourself, since we have released the models and code.
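For anyone reproducing this transfer number themselves, the mIoU metric quoted above is typically computed from a per-class confusion matrix accumulated over the validation set. A minimal sketch (this is not the repo's own evaluation code; the 150-class count and the ignore index of 255 are the common ADE20K conventions and are assumptions here):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean intersection-over-union between predicted and ground-truth label maps.

    pred, gt: integer arrays of the same shape with class indices.
    Pixels whose ground truth equals `ignore_index` are excluded.
    """
    mask = gt != ignore_index
    pred, gt = pred[mask], gt[mask]
    # Confusion matrix: rows = ground truth, columns = prediction.
    conf = np.bincount(
        num_classes * gt.astype(np.int64) + pred,
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    valid = union > 0  # skip classes absent from both pred and gt
    return (intersection[valid] / union[valid]).mean()

# Toy example with 3 classes (for ADE20K-150 one would use num_classes=150
# and accumulate the confusion matrix over all validation images).
pred = np.array([0, 1, 1, 2])
gt = np.array([0, 1, 2, 2])
print(mean_iou(pred, gt, num_classes=3))  # → 0.666... (IoUs: 1.0, 0.5, 0.5)
```

In practice one accumulates `conf` across the whole dataset before taking the diagonal, rather than averaging per-image mIoU values.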

@Jeff-LiangF
Author

Thanks for your prompt help! It would be great if you could test your best model and report the number so that the community can compare against your results. I'll also try to test it on my end. :)

@dingjiansw101
Owner

Sure, I will update the results later.
