Could be interesting to try. We went with the gradient penalty because it makes theoretical sense: we want real data points to be energy minima, which we encourage by pushing the norm of the energy's gradient toward zero at real samples.
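As a rough sketch of that idea (hypothetical helper, not the repo's actual code): penalize the squared norm of dE/dx at real samples, so that real data sits at critical points of the energy.

```python
import torch

def zero_centered_gradient_penalty(energy_model, real_data):
    """Sketch: penalize ||dE/dx||^2 at real samples so real data
    lies at energy minima. `energy_model` maps a batch (N, ...) to
    per-sample energies of shape (N,); names are illustrative."""
    real_data = real_data.clone().requires_grad_(True)
    energy = energy_model(real_data).sum()
    # Gradient of the total energy w.r.t. the inputs; create_graph=True
    # lets the penalty itself be backpropagated during training.
    (grad,) = torch.autograd.grad(energy, real_data, create_graph=True)
    return grad.pow(2).flatten(1).sum(dim=1).mean()
```

This penalty would be added to the usual EBM objective with some weighting coefficient.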
Still, I really appreciate your interpretation of the gradient penalty as enforcing energy minima.
However, the gradient penalty encodes our prior knowledge of this problem. If there is an alternative approach that relies on less prior knowledge, I think that would also be interesting.
So it would indeed be interesting to explore options other than the gradient penalty for fixing the temperature explosion.
We couldn't find another way to prevent it from exploding. If you run the experiments yourself, you'll notice this quickly.
Nothing in the theory of energy-based models suggests that this should happen. GAN theory, however, offers some clues as to why it is possible [the Lipschitz continuity required of the WGAN discriminator].
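For context, the WGAN-GP form of that idea enforces approximate 1-Lipschitzness by penalizing the critic's gradient norm at interpolates between real and fake samples. A minimal sketch (illustrative names, not this repo's implementation):

```python
import torch

def wgan_gradient_penalty(critic, real, fake, lam=10.0):
    """Sketch of the WGAN-GP penalty: push ||d critic / dx|| toward 1
    at random interpolates of real and fake batches of matching shape."""
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)))
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    (grad,) = torch.autograd.grad(critic(x_hat).sum(), x_hat,
                                  create_graph=True)
    norms = grad.flatten(1).norm(dim=1)
    return lam * ((norms - 1.0) ** 2).mean()
```

Note the contrast with the zero-centered penalty at real data: WGAN-GP targets gradient norm 1 on interpolates, while the energy-minima view targets norm 0 on real samples.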
I agree that prior knowledge from GAN theory helped us understand that the gradient penalty does something useful for energy-based model training as well [making real data points energy minima].
Good job! But I have a few questions:
What if we use a hinge loss as the energy model's loss? The gradient penalty works well, but we know training with it is slow.
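To make the suggestion concrete, here is one common hinge-style objective adapted to energies (lower energy = more real), written as a sketch; the margin and function names are assumptions, not anything from this repo:

```python
import torch
import torch.nn.functional as F

def hinge_energy_loss(e_real, e_fake, margin=1.0):
    """Hinge objective in energy form: push real energies below
    -margin and fake energies above +margin, then stop penalizing.
    Equivalent to the SNGAN hinge loss with D(x) = -E(x)."""
    return (F.relu(margin + e_real).mean()
            + F.relu(margin - e_fake).mean())
```

One appeal of the hinge loss is that it needs no second-order gradients, unlike the gradient penalty, so each training step is cheaper; whether it also prevents the temperature explosion would need to be checked experimentally.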
Could you show more final demo pictures on GitHub? I'd really like to see the full results on CIFAR-10 and CelebA.
Thanks for such a good job.