
About evaluation of the model #2

Closed
Limingxing00 opened this issue Jun 4, 2021 · 2 comments

Comments

@Limingxing00

Limingxing00 commented Jun 4, 2021

Hi,

thank you for the nice work.

I have a concern about the evaluation of the model: since there is no validation set for picking the best model, there may be a potential overfitting problem. (Or, what should a validation set for interactive segmentation look like? A unified standard would make it easier for everyone to compare their methods.)

Is this setup common in interactive object segmentation? I am new to interactive segmentation, so I hope you can address my concern. Thank you.

@hkchengrex
Owner

This repo is an integral (and minor) part of MiVOS so I didn't do much evaluation. You can take a look at more mainstream interactive segmentation methods like f-BRS: https://github.com/saic-vul/fbrs_interactive_segmentation -- they mostly use synthetic user input and/or user study for evaluation.
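For context, the synthetic-user-input protocols used by methods like f-BRS typically simulate clicks on the model's current errors and report a number-of-clicks (NoC) metric: how many simulated clicks are needed to reach a target IoU. Below is a minimal, hypothetical sketch of such a loop; the `model` callable and the click-placement rule are illustrative assumptions, not code from this repo or from f-BRS.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def next_click(pred, gt):
    """Simulate the next user click on the current error.

    Returns (y, x, is_positive): a positive click on a false-negative
    pixel or a negative click on a false-positive pixel, whichever
    error type currently dominates. Real protocols usually click the
    center of the largest error region; first-pixel is a simplification.
    """
    fn = np.logical_and(gt, np.logical_not(pred))   # missed foreground
    fp = np.logical_and(pred, np.logical_not(gt))   # spurious foreground
    if fn.sum() >= fp.sum():
        ys, xs = np.nonzero(fn)
        return ys[0], xs[0], True
    ys, xs = np.nonzero(fp)
    return ys[0], xs[0], False

def clicks_to_target(model, gt, target_iou=0.85, max_clicks=20):
    """NoC metric: clicks needed until IoU >= target_iou.

    `model` is any callable mapping a list of (y, x, is_positive)
    clicks to a predicted boolean mask (a hypothetical interface).
    """
    pred = np.zeros_like(gt, dtype=bool)
    clicks = []
    for n in range(1, max_clicks + 1):
        clicks.append(next_click(pred, gt))
        pred = model(clicks)
        if iou(pred, gt) >= target_iou:
            return n
    return max_clicks
```

Averaging `clicks_to_target` over a benchmark's images gives the NoC@85/NoC@90 numbers commonly reported, which sidesteps the need for a held-out validation set of real user interactions.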

@Limingxing00
Author

Thank you for your quick reply!
