
Training MVTec Wood datasets & Different inference results #42

Closed
jmlee99o opened this issue Aug 1, 2024 · 3 comments


jmlee99o commented Aug 1, 2024

Hi, I am a graduate student in Korea.
I want to generate various defective industrial images with your model,
so before working on our own data, I trained on the MVTec Wood dataset (defect: scratch) to check the model's performance.

However, when I train the model and run inference on the wood dataset, the results are quite different from those in your paper.
Both the good images and the mask images have lower quality than I expected. (Furthermore, the good images start overfitting after 400 kimg, and the generated defect images do not resemble the original scratches in the MVTec dataset.)

I would like to know how to tune the parameters (the ones you used) to improve image quality.

  1. Which parameters did you tune? (It may be a little rude to ask, but this is what I am most curious about.)

  2. Did you use any augmentation techniques other than the ADA of StyleGAN2-ADA?

  3. On a different note: in the model, both the normal and defect images are generated from latent vectors and then combined. Can I instead feed in a real normal image I already have and combine it with the generated defect image, rather than generating the normal image too? (That is, generate only the defect from a latent vector, not the normal image, because I think generated normal images of my industrial dataset would have poor quality.)

Thanks for reading!
I appreciate your work on this model.

Ldhlwh (Owner) commented Aug 5, 2024

  1. Which parameters did you tune? (It may be a little rude to ask, but this is what I am most curious about.)

If you mean hyperparameters, we have presented all our choices in the code and the paper. If you mean model parameters, you may take a look at the code below.

```python
import itertools

# Excerpt from the training setup: for each network, select the trainable
# parameters (target_param) and count them (num_param) according to the
# fine-tuning mode `ft` and the transfer mode.
for name, module, opt_kwargs, reg_interval in training_nets:
    num_param = 0
    if name == 'D':
        if G_kwargs.transfer == 'res_block_uni_dis':
            target_param = []
            for res in [4, 8, 16, 32, 64, 128, 256]:
                if res > D_kwargs.uni_st:
                    num_param += sum([p.numel() for p in getattr(module, f'mask_b{res}').parameters()])
                    target_param.append(getattr(module, f'mask_b{res}').parameters())
                else:
                    num_param += sum([p.numel() for p in getattr(module, f'uni_b{res}').parameters()])
                    target_param.append(getattr(module, f'uni_b{res}').parameters())
            target_param = itertools.chain(*target_param)
        else:
            num_param = sum([p.numel() for p in module.parameters()])
            target_param = module.parameters()
    elif name == 'D_match':
        num_param = sum([p.numel() for p in module.parameters()])
        target_param = module.parameters()
    elif name == 'G':
        if ft == 'default':
            num_param = sum([p.numel() for p in module.parameters()])
            target_param = module.parameters()
        elif ft == 'ft_map':
            num_param = sum([p.numel() for p in module.mapping.parameters()])
            target_param = module.mapping.parameters()
        elif ft == 'ft_syn':
            num_param = sum([p.numel() for p in module.synthesis.parameters()])
            target_param = module.synthesis.parameters()
        elif ft.startswith('ft_syn_'):
            num_trainable_block = int(ft.split('_')[-1])
            syn_modules = [module.synthesis.b4, module.synthesis.b8, module.synthesis.b16,
                           module.synthesis.b32, module.synthesis.b64, module.synthesis.b128,
                           module.synthesis.b256]
            target_param = itertools.chain(*[mod.parameters() for mod in syn_modules[:num_trainable_block]])
            num_param = sum([p.numel() for p in target_param])
            # Rebuild the chain: the iterator above was consumed by sum().
            target_param = itertools.chain(*[mod.parameters() for mod in syn_modules[:num_trainable_block]])
        elif ft.startswith('ft_map_syn_'):
            num_trainable_block = int(ft.split('_')[-1])
            syn_modules = [module.synthesis.b4, module.synthesis.b8, module.synthesis.b16,
                           module.synthesis.b32, module.synthesis.b64, module.synthesis.b128,
                           module.synthesis.b256]
            target_param = itertools.chain(*[mod.parameters() for mod in syn_modules[:num_trainable_block]],
                                           module.mapping.parameters())
            num_param = sum([p.numel() for p in target_param])
            # Rebuild the chain: the iterator above was consumed by sum().
            target_param = itertools.chain(*[mod.parameters() for mod in syn_modules[:num_trainable_block]],
                                           module.mapping.parameters())
        elif ft == 'transfer':
            if G_kwargs.transfer == 'dual_mod':
                target_param = module.defect_mapping.parameters()
                num_param = sum([p.numel() for p in target_param])
                # Re-fetch: the generator above was consumed by sum().
                target_param = module.defect_mapping.parameters()
            elif G_kwargs.transfer in ['res_block', 'res_block_match_dis', 'res_block_uni_dis']:
                target_param = [module.defect_mapping.parameters()]
                num_param += sum([p.numel() for p in module.defect_mapping.parameters()])
                for res in [4, 8, 16, 32, 64, 128, 256]:
                    if res >= G_kwargs.synthesis_kwargs.res_st:
                        target_param.append(getattr(module.synthesis, f'res_b{res}').parameters())
                        num_param += sum([p.numel() for p in getattr(module.synthesis, f'res_b{res}').parameters()])
                target_param = itertools.chain(*target_param)
```
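A side note on the pattern in the excerpt above: `itertools.chain` returns a one-shot iterator, so after `sum(...)` consumes it for parameter counting, the chain must be rebuilt before it can be handed to an optimizer. A minimal pure-Python sketch (the toy `parameters()` helper and the size lists are stand-ins for `nn.Module.parameters()`, used only for illustration):

```python
import itertools

# Toy stand-ins for module.parameters(): lists of "parameter" sizes.
block_params = {'b4': [3, 5], 'b8': [7]}

def parameters(block):
    # Fresh generator on every call, like nn.Module.parameters()
    return iter(block_params[block])

trainable_blocks = ['b4', 'b8']

# First pass: count parameters. This consumes the chained iterator.
target_param = itertools.chain(*[parameters(b) for b in trainable_blocks])
num_param = sum(target_param)  # 3 + 5 + 7 = 15

# The iterator is now exhausted, so it must be rebuilt before use.
assert sum(target_param) == 0  # nothing left in the old chain
target_param = itertools.chain(*[parameters(b) for b in trainable_blocks])
assert sum(target_param) == 15
```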

2. Did you use any augmentation techniques other than the ADA of StyleGAN2-ADA?

No, just ADA.

3. On a different note: in the model, both the normal and defect images are generated from latent vectors and then combined. Can I instead feed in a real normal image I already have and combine it with the generated defect image, rather than generating the normal image too? (That is, generate only the defect from a latent vector, not the normal image, because I think generated normal images of my industrial dataset would have poor quality.)

If you find that you cannot obtain a good base model that generates defect-free images in the first stage, then our DFMGAN may not meet your requirements, since it depends on well-learned object/texture features. You may consider anomaly generation methods with an image-to-image pipeline, such as Defect-GAN.
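For what it's worth, the compositing step the question describes (pasting a generated defect onto a real defect-free image via a generated mask) can be sketched independently of any particular model. The arrays below are hypothetical placeholders, not outputs of DFMGAN:

```python
import numpy as np

def composite_defect(normal_img, defect_img, mask):
    """Blend a generated defect onto a real normal image.

    normal_img, defect_img: float arrays in [0, 1], shape (H, W, 3)
    mask: float array in [0, 1], shape (H, W, 1); 1 where the defect is
    """
    return normal_img * (1.0 - mask) + defect_img * mask

# Hypothetical 4x4 example: white base image, black defect at one pixel.
normal = np.ones((4, 4, 3))
defect = np.zeros((4, 4, 3))
mask = np.zeros((4, 4, 1))
mask[0, 0] = 1.0

out = composite_defect(normal, defect, mask)
# out[0, 0] takes the defect value; all other pixels stay untouched.
```

Note that a hard paste like this ignores lighting and texture continuity at the mask boundary; soft (feathered) masks or blending methods would usually be needed for realistic results.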


lu1zero9 commented Aug 6, 2024


Hi, have you generated all the results to check the effects? If so, could you please send me a copy of the generated results? I would like to take a look. Thank you very much, I appreciate it!

Ldhlwh (Owner) commented Aug 28, 2024

I'll close this issue for now since it's been inactive for weeks.

Ldhlwh closed this as completed Aug 28, 2024