Pretraining with same results as in paper #5

Closed
nelson1425 opened this issue Apr 29, 2023 · 2 comments

nelson1425 commented Apr 29, 2023

I have run the pretraining and got the same results as in the paper. You can get the pretrained weights here: https://github.com/nelson1425/EfficientAD

rximg (Owner) commented Apr 29, 2023

Good job

rximg closed this as completed May 9, 2023

wangh09 commented Jun 27, 2023

Hi, I'm reproducing the distillation with my own setup and wondering whether my results are correct; the loss I get at step 10,000 for PDN-S is 0.639. Also, is there any other way to evaluate the distillation results? Thanks a lot!
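
For context, the distillation objective described in the EfficientAD paper is a mean-squared error between a frozen pretrained teacher's feature map and the PDN student's output on ImageNet images, so one straightforward evaluation is to compute that same MSE on a held-out split. Below is a minimal sketch under those assumptions; `teacher`, `pdn_s`, `distillation_loss`, `eval_distillation`, the 384-channel feature-map shape, and the loader format are illustrative assumptions, not code from this repository:

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher: torch.nn.Module,
                      pdn_s: torch.nn.Module,
                      images: torch.Tensor) -> torch.Tensor:
    """MSE between frozen teacher features and PDN-S output (hypothetical helper)."""
    with torch.no_grad():
        target = teacher(images)   # e.g. a (B, 384, H', W') feature map, per the paper
    pred = pdn_s(images)           # student output, assumed to match the target shape
    return F.mse_loss(pred, target)

@torch.no_grad()
def eval_distillation(teacher, pdn_s, loader):
    """Average the same MSE over a held-out split; loader assumed to yield (image, label) batches."""
    losses = [distillation_loss(teacher, pdn_s, images).item() for images, *_ in loader]
    return sum(losses) / len(losses)
```

Tracking this held-out MSE alongside the training loss would show whether the student is still improving or has started to overfit the training crops.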
