Reproducibility of results #12
Thanks for your interest!
@marco-rudolph But you are positive that the default setup in this repo reaches …
Also, if you would be so kind as to explain how you processed the … Thank you very much in advance anyway.
Yes. |
The images were just resized to 448x448, 224x224 and 112x112 pixels without any padding or cropping. |
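The resizing described above can be sketched as follows. This is a hedged illustration using Pillow, not the repository's actual preprocessing code; the function and variable names are assumptions:

```python
from PIL import Image

# The three input scales mentioned above; images are resized
# directly to squares, with no padding or cropping.
SCALES = (448, 224, 112)

def multi_scale_resize(img):
    """Return one square copy of `img` per scale (illustrative helper)."""
    return [img.resize((s, s)) for s in SCALES]

# Usage: a dummy image stands in for an MVTec AD sample.
img = Image.new("RGB", (1024, 1024), "gray")
pyramid = multi_scale_resize(img)
print([im.size for im in pyramid])  # [(448, 448), (224, 224), (112, 112)]
```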
Thank you very much for your feedback! |
Hello!
First of all, thank you very much for publishing your code as well as writing the paper.
I wonder how to robustly reproduce the results stated in the paper. Namely, what setup is used for the `grid` category from the MVTec Anomaly Detection dataset? I have tried the default setup and could not reach the value of 0.84, only 0.8. Also, my training process was quite unstable (maybe due to the aggressive default data augmentations).
I use a virtual environment compatible with `requirements.txt`, which I do not describe for brevity, but let me know if it is important. Thank you very much for your answer in advance.
P.S. Whilst a difference of 0.04 may seem insignificant, it is, for example, the difference between your method and an ideal AUC. Hence, I am quite interested in whether the results stated in the paper are really robust :)
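One way to check whether a 0.04 gap lies within run-to-run noise is to repeat training with several seeds and look at the spread of scores. The sketch below is purely illustrative: `train_and_evaluate` is a hypothetical stand-in for a full training run, not code from this repository, and the score range is made up for the example.

```python
import random
import statistics

def train_and_evaluate(seed):
    # Hypothetical stand-in for one seeded training run that returns
    # an AUROC-like score; replace with the real training pipeline.
    rng = random.Random(seed)
    return 0.80 + 0.04 * rng.random()

# Repeat the "run" with several seeds and summarize the spread.
scores = [train_and_evaluate(seed) for seed in range(5)]
mean, spread = statistics.mean(scores), statistics.pstdev(scores)
print(f"mean AUROC: {mean:.3f}, stdev: {spread:.3f}")
```

If the standard deviation across seeds is comparable to the 0.04 gap, a single-run difference between 0.8 and 0.84 may not be significant.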