Training on the MVTec Wood dataset & different inference results #42
If you mean hyperparameters, we have presented all our choices in the code and the paper. If you mean model parameters, you may have a look at the code below: DFMGAN/training/training_loop.py, lines 234 to 290 at abe6d64.
No, just ADA.
If you are not able to get a good base model generating defect-free images in the first stage, then our DFMGAN may not meet your requirements, since it depends on well-learned object/texture features. You may consider anomaly generation methods with an image-to-image pipeline, such as Defect-GAN.
Hi, have you generated all the results to check the effects? If so, could you please send me a copy of the generated results? I would like to take a look. Thank you very much, I appreciate it!
I'll close this issue for now since it's been inactive for weeks.
Hi, I am a graduate student in Korea.
I want to generate a variety of defective industrial images using your model,
so before working on our own data, I am using the MVTec Wood dataset (defect: scratch) to verify the model's performance.
However, when I train the model and run inference on the wood dataset, the results are quite different from those in your paper.
Both the good images and the mask images have lower quality than I expected. (Furthermore, the good images start overfitting after 400 kimg, and the generated defect images are not similar to the original scratches in the MVTec dataset.)
I would like to know how to tune the parameters you used in order to improve image quality.
Which parameters did you adjust? (It may be a little rude to ask, but this is what I'm most curious about.)
Did you use any augmentation techniques other than the ADA of StyleGAN2-ADA?
On a separate point: when the model generates a defective image, it generates both a normal image and a defect image from latent vectors and combines them. Can I modify the model so that only the defect image (and its mask) is generated from a latent vector, and it is then composited onto a real normal image that I already have, instead of onto a generated normal image? (I ask because I expect the generated normal images for my industrial dataset to have poor quality.)
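The modification asked about above — keeping a real, defect-free image and only generating the defect plus its mask — can be sketched as a simple mask-based alpha-composite. The function below is a minimal illustration of that idea, not DFMGAN's actual pipeline; the array shapes, value ranges, and the `composite_defect` name are assumptions for the sketch.

```python
import numpy as np

def composite_defect(normal_img, defect_img, mask):
    """Paste a generated defect onto a real defect-free image.

    normal_img, defect_img: float arrays in [0, 1], shape (H, W, 3).
    mask: float array in [0, 1], shape (H, W); 1 marks defect pixels.
    """
    alpha = mask[..., None]  # add a channel axis so it broadcasts over RGB
    return alpha * defect_img + (1.0 - alpha) * normal_img

# Toy usage: a 2x2 white "normal" image with a dark defect in one corner.
normal = np.ones((2, 2, 3))
defect = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
out = composite_defect(normal, defect, mask)
# out[0, 0] takes the defect pixel; out[1, 1] keeps the real background.
```

Whether this gives realistic results depends on the generated defect and mask aligning well with the real image's texture and lighting; a soft (non-binary) mask blends the boundary more smoothly.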
Thanks for reading!
I appreciate your work on this model.