Hi, thank you for your interesting DIM model and the open-source code. However, I am confused about the realization of the prior matching part:

DIM/cortex_DIM/models/discriminator.py, line 62 (at commit bac4765)

It seems that a GAN is used to force the global encoding to match the prior distribution. The discriminator loss consists of the difference between E_pos and E_neg (which matches the original GAN's formulation, i.e., with the log terms), plus a gradient penalty term of the kind introduced in WGAN-GP to satisfy the Lipschitz constraint. So I wonder: is combining the original GAN objective with a gradient penalty reasonable?
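Roughly, the loss being described can be sketched as below. This is only an illustration of that reading, not the repository's actual code; `disc`, `prior_samples`, `encodings`, and `penalty_weight` are placeholder names, and the exact penalty used in the repository may differ.

```python
import torch
import torch.nn.functional as F


def prior_discriminator_loss(disc, prior_samples, encodings, penalty_weight=1.0):
    """Original (log-based) GAN discriminator loss for prior matching, plus a
    simple gradient penalty on the discriminator inputs.

    `disc` maps a batch of vectors to unnormalized scores (logits);
    `prior_samples` play the role of "real" data and `encodings` (the global
    features from the encoder) the role of "fake" data.
    """
    # Detach and mark inputs as requiring grad so we can take d(logits)/d(input)
    # for the penalty; this is the discriminator update, so the encoder is not
    # trained through this loss.
    prior_samples = prior_samples.detach().requires_grad_(True)
    encodings = encodings.detach().requires_grad_(True)

    pos_logits = disc(prior_samples)   # E_pos: samples drawn from the prior
    neg_logits = disc(encodings)       # E_neg: encoder outputs

    # Original GAN formulation, -E[log D(prior)] - E[log(1 - D(encoding))],
    # written with softplus for numerical stability.
    gan_loss = F.softplus(-pos_logits).mean() + F.softplus(neg_logits).mean()

    # Gradient penalty: squared norm of the gradient of the logits with
    # respect to the discriminator inputs.
    grads = torch.autograd.grad(
        outputs=pos_logits.sum() + neg_logits.sum(),
        inputs=[prior_samples, encodings],
        create_graph=True)
    penalty = sum((g ** 2).flatten(1).sum(1).mean() for g in grads)

    return gan_loss + penalty_weight * penalty
```

For a uniform prior on the global encoding, `prior_samples` could be drawn with e.g. `torch.rand_like(encodings)`.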
HeimingX changed the title from "Bug in prior matching part" to "Questions about prior matching part" on Jan 8, 2020.
Which original GAN do you speak of? The penalty that I use comes from Kevin Roth's paper ("Stabilizing Training of Generative Adversarial Networks through Regularization"), which works much better than the WGAN-GP penalty in my experience. In Kevin's paper, he used one of the f-divergence GANs (a family that includes the original GAN; see Sebastian Nowozin's f-GAN paper), and as far as I know the penalty should work with any of them. Most of these are implemented in my code.
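For reference, here is a rough sketch of the two penalties being compared: the weighted gradient penalty from Roth et al. versus the interpolation-based WGAN-GP penalty. This reflects a reading of the two papers, not necessarily the exact form used in the cortex_DIM code; `disc`, `real`, and `fake` are placeholder names.

```python
import torch


def roth_penalty(disc, real, fake):
    """Regularizer in the style of Roth et al.: squared gradient norms of the
    discriminator logits, weighted by the squared probability the discriminator
    assigns to the opposite side (sketch of the paper, not the DIM code)."""
    real = real.detach().requires_grad_(True)
    fake = fake.detach().requires_grad_(True)

    d_real, d_fake = disc(real), disc(fake)
    g_real, = torch.autograd.grad(d_real.sum(), real, create_graph=True)
    g_fake, = torch.autograd.grad(d_fake.sum(), fake, create_graph=True)

    w_real = (1.0 - torch.sigmoid(d_real)).pow(2).squeeze()   # weight on real samples
    w_fake = torch.sigmoid(d_fake).pow(2).squeeze()           # weight on fake samples
    pen_real = (w_real * g_real.pow(2).flatten(1).sum(1)).mean()
    pen_fake = (w_fake * g_fake.pow(2).flatten(1).sum(1)).mean()
    return pen_real + pen_fake


def wgan_gp_penalty(disc, real, fake):
    """WGAN-GP penalty (Gulrajani et al.): push the gradient norm of the
    discriminator at random interpolates of real and fake samples toward 1."""
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)
    grad, = torch.autograd.grad(disc(interp).sum(), interp, create_graph=True)
    return (grad.flatten(1).norm(2, dim=1) - 1.0).pow(2).mean()
```

The main contrast is that the Roth-style penalty is evaluated at the real and fake samples themselves and down-weighted where the discriminator is already uncertain, while WGAN-GP is evaluated at interpolated points and targets a gradient norm of exactly 1.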