evaluate code #6

Open · 2448845600 opened this issue on Jun 2, 2020 · 6 comments

Comments

@2448845600

Could you publish the evaluation code? I think this repo is a good pipeline for few-shot segmentation. Thanks! @HKUSTCV

2448845600 changed the title from "Standard test code" to "evaluate code" on Jun 2, 2020
@Balabala-Hong

I have a few questions:

  1. In your experiments, how many times did you split the 760 trainval classes into 520 train classes and 240 val classes?
  2. How do you evaluate the model on the test classes? For example, in the one-shot setting, is the result for a specific class computed by taking each of the ten images in turn as the support image?
  3. Could you provide your split of the 520 train classes and 240 val classes (a sketch of one way such a split could be generated follows below)? With your class split and evaluation method, a fair comparison can be conducted. @HKUSTCV
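
For concreteness, here is a minimal sketch of how a 520/240 class split could be generated, assuming integer class IDs and a fixed seed. This is purely an illustration; the repo's actual split procedure has not been published, which is the point of the question.

```python
import random

# Illustrative sketch only: split 760 trainval class IDs into
# 520 train / 240 val classes. The uniform shuffle and the fixed
# seed are assumptions, not the repo's actual procedure.
NUM_TRAINVAL, NUM_TRAIN = 760, 520

rng = random.Random(0)                 # fixed seed for a reproducible split
class_ids = list(range(NUM_TRAINVAL))  # stand-in for the real class ID list
rng.shuffle(class_ids)

train_classes = sorted(class_ids[:NUM_TRAIN])
val_classes = sorted(class_ids[NUM_TRAIN:])
assert len(train_classes) == 520 and len(val_classes) == 240
```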

@ZezhouCheng

I am also confused about how the train and validation sets are used during training, and about the exact procedure for evaluating the model. From training.py, there is no model selection or hyperparameter search based on the validation set; perhaps that happens in the evaluation code, which has not been released. Could you provide the evaluation method so a fair comparison can be made?
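
For reference, the missing step would typically look something like the sketch below, which picks the checkpoint with the best mean IoU on the held-out validation classes. evaluate_miou() and checkpoint_paths are hypothetical names, not code from this repo, since training.py contains no such step.

```python
# Hypothetical sketch of the model-selection step that training.py lacks:
# keep the checkpoint with the best mean IoU on the held-out val classes.
# evaluate_miou() and checkpoint_paths are assumed names, not repo code.
def select_best_checkpoint(checkpoint_paths, val_loader, evaluate_miou):
    best_path, best_miou = None, -1.0
    for path in checkpoint_paths:
        miou = evaluate_miou(path, val_loader)  # mean IoU over val classes
        if miou > best_miou:
            best_path, best_miou = path, miou
    return best_path, best_miou
```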

@SMHendryx

I've added evaluation code in this PR: #13

@jzsherlock4869

jzsherlock4869 commented Mar 10, 2021

@SMHendryx Have you evaluated the authors' published model (linked in readme.md: https://drive.google.com/open?id=1Vk0Pq8vOZrfrDtCISMcJmAQnt9jkXfPn) in the one-shot setting? I have tested it with my own evaluation script and also with your PR #13 script with args.k=1; both experiments give a mean IoU of ~0.80, which is ~0.07 higher than the paper reports (0.73). Could you please share your reproduction of the one-shot setting?
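
One plausible source of such a gap is the averaging convention: averaging IoU over all episodes versus averaging within each class first can change the final number noticeably. Below is a minimal sketch of binary-mask IoU with class-first averaging, assuming numpy masks; this is one common convention, not the paper's confirmed protocol.

```python
import numpy as np

def binary_iou(pred, gt):
    """IoU of two binary (H, W) masks with values in {0, 1}."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0  # convention for empty masks

def mean_iou(episodes):
    """episodes: iterable of (class_id, pred_mask, gt_mask).
    Averages IoU within each class first, then across classes; averaging
    over all episodes instead can yield a noticeably different number."""
    per_class = {}
    for cls, pred, gt in episodes:
        per_class.setdefault(cls, []).append(binary_iou(pred, gt))
    return float(np.mean([np.mean(v) for v in per_class.values()]))
```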

@lilyandluc

> Could you publish the evaluation code? I think this repo is a good pipeline for few-shot segmentation. Thanks! @HKUSTCV

Have you gotten that evaluation code?

@lilyandluc

> @SMHendryx Have you evaluated the authors' published model (linked in readme.md: https://drive.google.com/open?id=1Vk0Pq8vOZrfrDtCISMcJmAQnt9jkXfPn) in the one-shot setting? I have tested it with my own evaluation script and also with your PR #13 script with args.k=1; both experiments give a mean IoU of ~0.80, which is ~0.07 higher than the paper reports (0.73). Could you please share your reproduction of the one-shot setting?

Would you please share that evaluation code with me?
