
Not able to reproduce Open Set numbers #24

Closed
roysubhankar opened this issue Oct 14, 2021 · 7 comments

Comments

@roysubhankar

Hi,

Thanks for making the code public.

I tried to reproduce the numbers for the open-set setting of OfficeHome, and the numbers I get are much lower than what you report in the paper. I have already tried several torch and torchvision environments, but every environment gives the lower numbers.

Is it possible for you to upload the source-only model checkpoints for the open-set setting? Then I hope to reproduce the numbers for SHOT-IM and SHOT with your source-only checkpoints (source_F.pt, source_B.pt and source_C.pt). It would indeed be very helpful.
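For reference, this is roughly how I plan to load and evaluate them. It is only a minimal sketch: the constructor names and arguments come from my reading of network.py in this repo, and the checkpoint path, class count and layer types are placeholders that may not match your exact training configuration.

```python
import torch
import network  # network.py from this repo

# NOTE: the constructors/arguments below are my assumption from reading
# network.py; the bottleneck size, layer types and class count used for
# the Office-Home ODA source models may differ.
netF = network.ResBase(res_name='resnet50').cuda()
netB = network.feat_bootleneck(feature_dim=netF.in_features,
                               bottleneck_dim=256, type='bn').cuda()
netC = network.feat_classifier(class_num=25, bottleneck_dim=256,
                               type='wn').cuda()

# ckpt_dir is a placeholder for wherever the source-only models are stored.
ckpt_dir = 'ckps/source/oda/office-home/A'
netF.load_state_dict(torch.load(f'{ckpt_dir}/source_F.pt'))
netB.load_state_dict(torch.load(f'{ckpt_dir}/source_B.pt'))
netC.load_state_dict(torch.load(f'{ckpt_dir}/source_C.pt'))
netF.eval(); netB.eval(); netC.eval()
```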

Thanks in advance.

@tim-learn
Owner


Hi Roy, so the source-only model you trained performed worse than the one in our paper? If so, I will send these models to your email.

Best

@roysubhankar
Author

Hi @tim-learn, yes, the source models performed worse than in your paper. If you could please send them to this email, subhankar.roy@unitn.it, that would be great. Thank you.

@roysubhankar
Author

Hi @tim-learn, a gentle reminder to please send the checkpoints to the above email address, as we discussed.

@tim-learn
Owner


Hi Roy, sorry for the delay. I have re-trained SHOT today, and the average accuracy of ODA (OfficeHome) is 73.0%. The associated models have been uploaded to https://drive.google.com/drive/folders/14GIyQ-Dj7Mr8_FJdPl4EBhFMgxQ2LXnq. Please try again and let me know whether it works for you.

Best

@roysubhankar
Author

roysubhankar commented Oct 26, 2021

Hi @tim-learn , thank you for sending the checkpoints.

I used your source-trained checkpoints and simply ran them to compute the source-only numbers for ODA. The numbers I get are very poor (almost like a random guess). I am using the default run command for ODA.
Attaching the output below.
(Screenshot of the run output: "Screenshot 2021-10-26 at 14 55 34")

I am not sure why the numbers are so bad. When I was training my own model, the numbers were better (though still not close to what you report, they were decent). I was wondering whether there is some difference in the dataset list.txt or something different in the installed packages.

Is it possible to share the dataset_list.txt for each domain of Office-Home that you used for the experiments, along with the list of packages (and their versions) used? It would be very helpful. Thank you again.
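In case the format matters: I am assuming each line of the list file is an image path followed by an integer label, e.g. `Art/Alarm_Clock/00001.jpg 0`. This is the quick sanity check I run on my own list (the file path below is just an example):

```python
# Quick sanity check on a list file; the path is only an example and
# each line is assumed to be "<image_path> <integer_label>".
from collections import Counter

with open('data/office-home/Art_list.txt') as f:
    labels = [int(line.strip().split()[-1]) for line in f if line.strip()]

print(len(labels), 'images,', len(set(labels)), 'distinct labels')
print('min/max label:', min(labels), max(labels))
print('5 most common labels:', Counter(labels).most_common(5))
```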

@tim-learn
Owner


How about the performance for other settings like closed-set UDA or partial-set UDA with this code? If those results are okay, it suggests that both the versions of the library packages and the data list files are fine.

@roysubhankar
Author

Hi @tim-learn, the numbers from your paper now match when I re-run your code. The problem was actually in the file list: we were using different file lists for the open-set setting, and that's why the numbers were different. Thanks for your help. Closing the issue.
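For anyone who hits the same problem later: since the mismatch was only in the list files, a quick way to spot it is to compare the two lists directly. A rough sketch, with example file names:

```python
# Rough comparison of two list files; the file names are just examples.
# Each line is assumed to be "<image_path> <integer_label>".
def read_list(path):
    with open(path) as f:
        return dict(line.rsplit(' ', 1) for line in (l.strip() for l in f) if line)

mine   = read_list('my_lists/Art.txt')
theirs = read_list('shot_lists/Art.txt')

common = mine.keys() & theirs.keys()
print('paths with different labels:', sum(mine[p] != theirs[p] for p in common))
print('paths only in my list      :', len(mine.keys() - theirs.keys()))
print('paths only in the repo list:', len(theirs.keys() - mine.keys()))
```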
