
Reproduction about NICO dataset #8

Open
Gaohan123 opened this issue Feb 7, 2022 · 5 comments

Comments

@Gaohan123

Thanks for your great work!

In your paper, you report results on the NICO dataset, but this repo contains no dataset split file for it. I tried to split the dataset according to the description in the paper and then reproduce the baseline ResNet-18 and StableNet experiments.

My results show a best accuracy of 47.71 for the baseline ResNet-18, versus 51.71 in the paper; that gap seems small. However, the best accuracy I get for StableNet is 48.20, versus 59.76 in the paper, which is confusing.

I understand there is some variance from the randomness of the data split and from differences in hyperparameter tuning. Could you please provide the NICO dataset split file and the recommended hyperparameter settings for it? Thank you!

@Bigfishering

Hello, could you share your contact information so I can ask you about the reproduction process?

@yangcong356

@Gaohan123 I am running into the same problem as you. Have you solved it yet?

@Gaohan123
Author

> @Gaohan123 I am running into the same problem as you. Have you solved it yet?

Actually not...

@Jimmy-7664

@Gaohan123 I ran into some problems while reproducing the results. How did you obtain the dataset split? Could you provide an example of the structure of the split dataset? Looking forward to your reply.

@Gaohan123
Author

> @Gaohan123 I ran into some problems while reproducing the results. How did you obtain the dataset split? Could you provide an example of the structure of the split dataset? Looking forward to your reply.

Personally, I split the dataset as follows. For each class, randomly pick 2 domains as the OOD test domains; the remaining domains are training domains. Among the training domains, pick one as the dominant domain and keep all of its samples; the others are minor domains, and for each minor domain I keep only 20% as many samples as the dominant domain has.
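For concreteness, the split described above could be sketched roughly like this (this is my own illustrative code, not an official split script; the function name, data layout, and seed handling are all assumptions):

```python
import random
from collections import defaultdict

def split_nico(samples, num_test_domains=2, minor_ratio=0.2, seed=0):
    """Sketch of the split described above.

    `samples` maps class -> domain -> list of sample ids (hypothetical layout).
    Returns (train, test), each mapping class -> list of sample ids.
    """
    rng = random.Random(seed)
    train, test = defaultdict(list), defaultdict(list)
    for cls, domains in samples.items():
        names = sorted(domains)
        # Randomly pick 2 domains per class as OOD test domains.
        test_domains = rng.sample(names, num_test_domains)
        train_domains = [d for d in names if d not in test_domains]
        # One training domain is dominant: keep all of its samples.
        dominant = rng.choice(train_domains)
        train[cls].extend(domains[dominant])
        # Each minor domain keeps at most 20% as many samples as the dominant domain.
        cap = int(minor_ratio * len(domains[dominant]))
        for d in train_domains:
            if d == dominant:
                continue
            train[cls].extend(rng.sample(domains[d], min(cap, len(domains[d]))))
        for d in test_domains:
            test[cls].extend(domains[d])
    return train, test
```

With 5 domains of 10 samples each per class, this yields 20 test samples (2 full domains) and 14 training samples (10 from the dominant domain plus 2 from each of the 2 minor domains) per class.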
