Questions about prior matching part #36

Open · HeimingX opened this issue Jan 7, 2020 · 1 comment

Comments


HeimingX commented Jan 7, 2020

Hi, thank you for your interesting DIM model and the open-source code.

However, I am confused about the implementation of the prior matching part:

self.add_losses(discriminator=-difference + gp_loss)

It seems that a GAN is used to force the global encoding to match the prior distribution. The discriminator loss consists of the difference between E_pos and E_neg (which reproduces the original GAN formulation, i.e., with the log terms) plus a gradient penalty term, which was introduced in WGAN-GP to satisfy the Lipschitz constraint. So I wonder whether combining the original GAN loss with a gradient penalty is reasonable.
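To make my reading concrete, here is a rough PyTorch sketch of how I understand that loss; the function and tensor names are mine, not the ones in the repo:

import torch.nn.functional as F

def gan_prior_matching_loss(d_prior_logits, d_encoder_logits, gp_loss):
    # E_pos - E_neg reproduces the original GAN log terms:
    #   E[log sigmoid(D(z_prior))] + E[log(1 - sigmoid(D(E(x))))]
    E_pos = -F.softplus(-d_prior_logits).mean()   # log sigmoid(D(z_prior))
    E_neg = F.softplus(d_encoder_logits).mean()   # -log(1 - sigmoid(D(E(x))))
    difference = E_pos - E_neg
    return -difference + gp_loss                  # the registered discriminator loss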

HeimingX changed the title from "Bug in prior matching part" to "Questions about prior matching part" on Jan 8, 2020
rdevon (Owner) commented May 14, 2020

Which original GAN do you speak of? The penalty that I use comes from Kevin Roth's paper, which in my experience works much better than the WGAN-GP penalty. In Kevin's paper, the penalty was applied to one of the f-divergence GANs (a family that includes the original GAN; see Sebastian Nowozin's paper), and as far as I know it should work with any of them. Most of these are implemented in my code.
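For reference, the Roth et al. regularizer weights the squared gradient norm of the discriminator by its own confidence on each side, rather than enforcing a Lipschitz constraint the way WGAN-GP does. A rough sketch; the names and the gamma weighting here are illustrative, not the exact code in this repo:

import torch

def roth_gradient_penalty(discriminator, prior_samples, encoder_samples, gamma=1.0):
    # Sketch of the Roth et al. (2017) penalty for the standard GAN:
    # squared gradient norm of the discriminator logits w.r.t. its inputs,
    # weighted by (1 - sigmoid(D))^2 on prior samples and by sigmoid(D)^2
    # on encoder outputs.
    def weighted_grad_norm(x, weight_fn):
        x = x.detach().requires_grad_(True)
        logits = discriminator(x)
        grads, = torch.autograd.grad(logits.sum(), x, create_graph=True)
        grad_norm2 = grads.flatten(1).pow(2).sum(dim=1)
        return (weight_fn(logits.squeeze(-1)) * grad_norm2).mean()

    penalty = (weighted_grad_norm(prior_samples,
                                  lambda l: (1 - torch.sigmoid(l)).pow(2))
               + weighted_grad_norm(encoder_samples,
                                    lambda l: torch.sigmoid(l).pow(2)))
    return 0.5 * gamma * penalty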
