I am trying to train a WGAN-GP on a grayscale image dataset of about 13,000 samples. I am using a ResNet architecture to generate 64x64 images, based on the implementation here (https://github.com/jalola/improved-wgan-pytorch). I ran into a sharp drop in the discriminator loss. I tried to fix it by:
reducing the learning rate
randomly switching labels between real and fake images
applying L2 regularization (to both the G and D models)
applying many kinds of data augmentation
But I am still stuck with the problem. The figures show the D loss (multiplied by -1) and the G loss over 10K iterations.
Can anyone give me an idea of what causes that sharp drop?
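For context, a sudden collapse in the critic loss is often related to the gradient penalty term, which is supposed to keep the critic's gradient norm near 1. Below is a minimal, self-contained PyTorch sketch of how WGAN-GP's gradient penalty is typically computed (the `critic` here is a hypothetical stand-in, not the ResNet from the linked repo); checking this term's magnitude separately from the Wasserstein estimate can help localize the drop:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Standard WGAN-GP penalty: penalize deviation of the critic's
    gradient norm from 1 at points interpolated between real and fake."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over C, H, W
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradients of critic scores w.r.t. the interpolated inputs
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

# Toy critic on 1x64x64 grayscale images (hypothetical, for illustration only)
critic = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(64 * 64, 1),
)
real = torch.randn(4, 1, 64, 64)
fake = torch.randn(4, 1, 64, 64)
gp = gradient_penalty(critic, real, fake)
```

If the penalty stays small while the Wasserstein term plunges, the critic is likely overpowering the generator; logging the two terms separately over the same 10K iterations would make that visible.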