Some questions about the training time and the final results #4

First of all, thank you for your excellent work. I ran into some questions while reproducing the results. The best I could get was:

AUROC: 0.6737560033798218
AUROC (pixel level): 0.8384530544281006
threshold: 0.59561723

I kept almost all of the parameters in config.yaml unchanged except for the batch size. I wonder whether my settings match the ones you used and, if not, how I should change them to get similar results.

Comments
Training the model on the 15 MVTec categories for 2 days is realistic, since diffusion models are expensive to train.
I also ran into this problem with the carpet results: there is a huge gap between the reported results and the ones I reproduced. I tried 1000, 1500, and 2000 training epochs, DA epochs of 1, 2, and 3, and many other hyperparameters.
For carpet, w = 0 and DA_chp = 0 should give the best results. I will also publish checkpoints and settings very soon.
There is no DA_chp in config.yaml; do you mean DA_epochs or something else?
DA_epochs is the number of iterations used to fine-tune the feature extractor, and DA_chp selects which fine-tuned checkpoint is loaded. For carpet, setting both to zero gives the best results.
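For reference, a hypothetical sketch of how these settings could appear in config.yaml; the key names are taken from this thread, and the published config may name or group them differently:

```yaml
# Hypothetical config.yaml excerpt for the carpet category.
# Key names follow this thread; check the published settings for the real layout.
w: 0          # guidance/conditioning weight
DA_epochs: 0  # fine-tuning iterations for the feature extractor (0 = none)
DA_chp: 0     # fine-tuned checkpoint to load (0 = use the pretrained weights)
```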
Does DA_chp = 0 mean I only need to train the feature extractor for one iteration for carpet?
It means that a pretrained feature extractor outperforms a fine-tuned one.
No fine-tuning? Directly use the pretrained feature extractor?
Exactly.
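In code terms, skipping the fine-tuning stage amounts to something like the following minimal sketch. It assumes a torchvision WideResNet backbone; the repository's actual feature extractor and loading code may differ:

```python
from torchvision.models import wide_resnet101_2, Wide_ResNet101_2_Weights

# Use the ImageNet-pretrained backbone as-is, i.e. the DA_epochs = 0 /
# DA_chp = 0 setting discussed above: no domain-adaptation fine-tuning.
# (The backbone choice here is an assumption, not the repo's confirmed model.)
feature_extractor = wide_resnet101_2(weights=Wide_ResNet101_2_Weights.IMAGENET1K_V2)
feature_extractor.eval()  # inference mode only
for p in feature_extractor.parameters():
    p.requires_grad_(False)  # freeze all weights; no gradient updates
```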
I directly used the pretrained feature extractor and set w = 0; this gave the best results I have been able to reproduce.
Checkpoints are published.