evaluate code #6
Comments
I have some questions below:
It is also confusing to me how the train set and the validation set are used during training, and what the exact procedure for evaluating the model is. From
I've added evaluation code in this PR: #13
@SMHendryx Have you evaluated the author-published model (linked in readme.md) https://drive.google.com/open?id=1Vk0Pq8vOZrfrDtCISMcJmAQnt9jkXfPn in the one-shot setting? I have tested with my own evaluation script, and also ran your PR #13 script with args.k=1; both experiments give a mean IoU of ~0.80, which is ~0.07 higher than the paper reports (0.73). Could you please let me know how you reproduced the one-shot setting?
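For reference, here is a minimal sketch of how one might compute mean IoU over one-shot episodes (i.e. k=1). The model interface (`model(support_img, support_mask, query_img)` returning per-pixel foreground logits) and the `episodes` iterable are assumptions for illustration only; they may not match this repo's or PR #13's actual code.

```python
# Minimal sketch (not the repo's actual script): mean foreground IoU over
# one-shot episodes for binary few-shot segmentation.
import numpy as np
import torch


def binary_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU of the foreground class for one query image."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 1.0


def evaluate_one_shot(model, episodes, device="cuda"):
    """Average foreground IoU over (support, query) episodes with k=1.

    `episodes` is assumed to yield (support_img, support_mask,
    query_img, query_mask) tensors; adapt to the real data loader.
    """
    model.eval()
    ious = []
    with torch.no_grad():
        for support_img, support_mask, query_img, query_mask in episodes:
            logits = model(support_img.to(device),
                           support_mask.to(device),
                           query_img.to(device))
            # Threshold foreground probability at 0.5 to get a binary mask.
            pred = (torch.sigmoid(logits) > 0.5).cpu().numpy().squeeze()
            gt = query_mask.numpy().astype(bool).squeeze()
            ious.append(binary_iou(pred, gt))
    return float(np.mean(ious))
```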
Have you got that evaluation code?
Would you please share that evaluation code with me?
Could you publish the evaluation code? I think this repo is a good pipeline for few-shot segmentation. Thanks! @HKUSTCV