
Why do the test results vary greatly, with fluctuations of up to 10 points, when using the pre-trained model you provided without any changes to the code? #1

Open
one23sunnyQQ opened this issue Mar 8, 2021 · 9 comments


@one23sunnyQQ

Hi, thank you for sharing your code. But why is it that, using the pre-trained model you provided and without any changes to the code, the test results vary greatly, with fluctuations of up to 10 points? May I ask how the test results reported in your paper can be taken as the final result when the performance fluctuates so much? Looking forward to your reply.

@liulu112601
Owner

> Hi, thank you for sharing your code. But why is it that, using the pre-trained model you provided and without any changes to the code, the test results vary greatly, with fluctuations of up to 10 points? May I ask how the test results reported in your paper can be taken as the final result when the performance fluctuates so much? Looking forward to your reply.

Hi there, thanks for your question. Can I ask which pretrained model you used? I don't provide a pretrained model for URT. Do you mean the pretrained backbones provided by SUR?

@one23sunnyQQ
Author

Yes, I used the pretrained backbones provided by SUR.

@liulu112601
Owner

> Yes, I used the pretrained backbones provided by SUR.

For SUR, please refer to their repo for more details: https://github.com/dvornikita/SUR

For URT, our result is the average of three runs, and I don't observe such big fluctuations of 10 percent.

Hope this answers your question.
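
For anyone reproducing this, a minimal sketch of what averaging over three runs could look like; the `evaluate_run` function, the seeds, and the accuracy values below are placeholders, not the repo's actual entry points:

```python
import numpy as np

def evaluate_run(seed, n_episodes=600):
    """Hypothetical placeholder for one evaluation run; returns per-episode accuracies."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.75, scale=0.10, size=n_episodes)

# Reported number = mean of the per-run accuracies over three independent runs.
run_means = [evaluate_run(seed).mean() for seed in (0, 1, 2)]
print(f"accuracy averaged over runs: {np.mean(run_means) * 100:.2f}")
```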

@xialeiliu

Do you have updated Traffic Sign results with the fixed loader issue?

What we get for Traffic Sign using your repo with the latest Meta-Dataset loader is about 50%.

Could you please help confirm that?

@sudarshan1994

sudarshan1994 commented Jun 20, 2021

Hi,
Thank you for your contribution. I tried training URT using the pre-trained weights of SUR, as instructed in your repo's README. However, I got results that differ from the reported ones; they are pasted below. Any help in clarifying this discrepancy would be much appreciated.

| model \ data | sur-paper | sur-exp | urt | ok-06/10 |
| --- | --- | --- | --- | --- |
| ilsvrc_2012 | 56.30 +- 0.00 | 56.30 +- 0.00 | 58.75 +- 0.00 | 2.45 |
| omniglot | 93.10 +- 0.00 | 93.10 +- 0.00 | 75.17 +- 0.00 | -17.93 |
| aircraft | 85.40 +- 0.00 | 85.40 +- 0.00 | 94.00 +- 0.00 | 8.60 |
| cu_birds | 71.40 +- 0.00 | 71.40 +- 0.00 | 75.00 +- 0.00 | 3.60 |
| dtd | 71.50 +- 0.00 | 71.50 +- 0.00 | 86.00 +- 0.00 | 14.50 |
| quickdraw | 81.30 +- 0.00 | 81.30 +- 0.00 | 77.02 +- 0.00 | -4.28 |
| fungi | 63.10 +- 0.00 | 63.10 +- 0.00 | 42.42 +- 0.00 | -20.68 |
| vgg_flower | 82.80 +- 0.00 | 82.80 +- 0.00 | 92.22 +- 0.00 | 9.42 |
| traffic_sign | 70.40 +- 0.00 | 70.40 +- 0.00 | 90.00 +- 0.00 | 19.60 |
| mscoco | 52.40 +- 0.00 | 52.40 +- 0.00 | 52.22 +- 0.00 | -0.18 |

@sudarshan1994

I used the ResNet features released in the repo and got the same results as in the paper for all the datasets except for Traffic Sign and MNIST. Thanks for releasing the features!

@liulu112601
Owner

> I used the ResNet features released in the repo and got the same results as in the paper for all the datasets except for Traffic Sign and MNIST. Thanks for releasing the features!

Thanks for raising this issue. Just a kind reminder that, because of a shuffling issue described in google-research/meta-dataset#54, the results have been affected, especially for Traffic Sign, and the updated results have been posted on OpenReview.
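
For context, a minimal, self-contained illustration of why the missing shuffling matters; this is not the Meta-Dataset reader itself, and the track layout and numbers below are only assumptions. Traffic Sign examples are stored as tracks of consecutive frames of the same physical sign, so without per-class shuffling an episode can put near-duplicate frames into both the support and query sets:

```python
import numpy as np

rng = np.random.default_rng(0)
track_length = 30   # consecutive frames of one sign (rough assumption about the layout)
n_tracks = 50
example_ids = np.arange(track_length * n_tracks)  # records stored track by track

# Without shuffling: an episode drawn from a consecutive block mixes frames of the
# same track into support and query, making the task artificially easy.
start = rng.integers(0, example_ids.size - 40)
unshuffled_episode = example_ids[start:start + 40]

# With shuffling (the fix): frames from the same track rarely end up together.
shuffled_episode = rng.permutation(example_ids)[:40]

print("unshuffled:", unshuffled_episode[:10])
print("shuffled:  ", np.sort(shuffled_episode)[:10])
```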

@sudarshan1994
Copy link

Yeah, I am aware of the bug; thanks for letting me know, though. I am still using the old, buggy dataloader just to see whether I can reproduce the same results you got. I am just trying to calibrate my Meta-Dataset setup against your code; I think something is not right about my setup.

Would it be possible to release the TFRecords you used? I understand that is a lot of work, of course, but it would be super helpful. Any help would be much appreciated.

@sudarshan1994
Copy link

One more question: are the standard deviations reported in the paper calculated over 3 different runs, or over the 600 test tasks within a single run? Thank you for your time!
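
For reference, the second convention mentioned here (a 95% confidence interval over the 600 test episodes of a single run, which is a common way to report Meta-Dataset results) could be computed roughly as below; the accuracies are placeholders, and this is only a sketch, not a statement of what the paper actually did:

```python
import numpy as np

rng = np.random.default_rng(0)
episode_acc = rng.normal(loc=0.75, scale=0.10, size=600)  # placeholder per-episode accuracies

# 95% confidence interval of the mean accuracy over the 600 test episodes of one run.
mean = episode_acc.mean()
ci95 = 1.96 * episode_acc.std(ddof=1) / np.sqrt(episode_acc.size)
print(f"{mean * 100:.2f} +- {ci95 * 100:.2f}")
```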
