This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Unable to reproduce results when training densenet from scratch #13

Open
praveen5733 opened this issue Nov 4, 2020 · 6 comments
Comments

@praveen5733

I am able to reproduce the results reported in the paper when I use the pretrained models provided in the repo.
But when I train a DenseNet from scratch, the results are worse than those reported. Has anyone faced a similar problem?

@tangbohu

tangbohu commented Nov 7, 2020

So do I. Have you solved it?

@YixuanLi
Contributor

YixuanLi commented Nov 7, 2020

We are unable to update the GitHub repo at the moment. However, we have recently built another repo that provides ODIN as well as many other OOD detection methods. Could you try this one: https://github.com/jfc43/informative-outlier-mining?

@huberl

huberl commented Jan 7, 2021

I face the same problem @praveen5733 @tangbohu

@YixuanLi
Contributor

YixuanLi commented Jan 7, 2021 via email

@Jiejiegary

Hi all, I have a similar problem. I trained DenseNet (and WideResNet) on CIFAR-10, and the models reach normal test accuracy. But when I test them with ODIN on this task, I see a pretty large gap between my results and the reported ones. Maybe I am missing something here.

@YixuanLi
Contributor

YixuanLi commented Apr 26, 2021

For WideResNet, you can refer to our latest paper: https://github.com/wetliu/energy_ood. We also recommend using the energy score, as it is parameter-free and gives performance comparable to or better than ODIN.
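(For readers landing here: a minimal sketch of the energy score from the energy_ood paper, E(x) = -T · logsumexp(f(x)/T), where f(x) are the classifier logits. The function name and NumPy implementation are mine, not from either repo; lower energy indicates a more in-distribution input.)

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy-based OOD score: E(x) = -T * logsumexp(logits / T).

    Parameter-free in the sense that no softmax temperature or input
    perturbation needs to be tuned per dataset. Lower values suggest
    in-distribution inputs; threshold on this score to flag OOD samples.
    """
    logits = np.asarray(logits, dtype=np.float64)
    # Numerically stable logsumexp: factor out the max logit first.
    m = logits.max(axis=-1, keepdims=True)
    lse = m.squeeze(-1) / T + np.log(np.exp((logits - m) / T).sum(axis=-1))
    return -T * lse
```

A confident in-distribution prediction (one dominant logit) yields a much lower energy than a flat, uncertain one, which is what the detector thresholds on.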

For ODIN, you can typically get a ballpark performance estimate by setting the temperature to T=1000.
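(To illustrate what that T=1000 setting does: a sketch of ODIN's temperature-scaled max-softmax score. The function name is mine, and this omits ODIN's second component, the input-perturbation step, which needs gradients with respect to the input.)

```python
import numpy as np

def odin_score(logits, T=1000.0):
    """Temperature-scaled maximum softmax probability (the ODIN score,
    without the input-perturbation step).

    Dividing logits by a large T flattens the softmax, which tends to
    separate in-distribution from OOD inputs better than plain softmax.
    """
    z = np.asarray(logits, dtype=np.float64) / T
    z = z - z.max(axis=-1, keepdims=True)  # stabilize the softmax
    p = np.exp(z)
    p = p / p.sum(axis=-1, keepdims=True)
    return p.max(axis=-1)
```

At T=1000 even a very confident prediction produces a near-uniform softmax, so the score varies in a narrow band just above 1/num_classes; the OOD threshold is then tuned on that band.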
