In the gradient_penalty function of the WGAN class, alpha is sampled from a normal distribution (tf.random.normal) with mean 0.0 and standard deviation 1.0.
In the "Improved Training of Wasserstein GANs" paper and its reference code, and in every other implementation I have seen, it is instead sampled from a uniform distribution over [0, 1].
I cannot find any discussion of sampling this coefficient from a distribution other than the one proposed, yet training clearly still works. Can anyone explain the motivation for this deviation from the original model?
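For reference, here is a minimal NumPy sketch of the interpolation step as proposed in the paper: one alpha per batch sample is drawn from U[0, 1] and used to mix real and fake samples, so every interpolate lies on the straight line between a real and a fake point. The batch shapes and variable names here are hypothetical, not taken from the repo:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate(real, fake, rng):
    # Paper's scheme: alpha ~ U[0, 1], one scalar per sample in the batch,
    # broadcast across the remaining dimensions.
    alpha = rng.uniform(0.0, 1.0, size=(real.shape[0],) + (1,) * (real.ndim - 1))
    return alpha * real + (1.0 - alpha) * fake

# Hypothetical batch of 4 flattened "images" with 8 features each.
real = rng.normal(size=(4, 8))
fake = rng.normal(size=(4, 8))
mixed = interpolate(real, fake, rng)

# With uniform alpha in [0, 1], every interpolate is bounded elementwise
# by the real and fake values it mixes.
lo = np.minimum(real, fake)
hi = np.maximum(real, fake)
assert np.all(mixed >= lo) and np.all(mixed <= hi)
```

Note the contrast with a Gaussian alpha: since tf.random.normal values are unbounded, the "interpolates" can fall outside the segment between the real and fake samples (extrapolation), which is exactly the deviation being asked about.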