
(Leaky) ReLu #2

Closed
botkevin opened this issue Dec 2, 2020 · 2 comments

Comments

@botkevin

botkevin commented Dec 2, 2020

Is there a reason why you use a normal ReLU for encoding, but a leaky ReLU for decoding?

@podgorskiy
Owner

There is no particular reason.

Overall, the architecture of the encoder and decoder is DCGAN-like (the encoder is similar to the discriminator and the decoder to the generator). https://arxiv.org/pdf/1511.06434.pdf
In DCGAN they claimed that ReLU in the generator and Leaky ReLU in the discriminator work better.
It is commonly believed that this is because the discriminator has to adapt and change a lot during training. If the network has to change a lot, Leaky ReLU is better, since it allows dead neurons to recover.

Obviously, the above discussion is about GANs, and here we have a VAE, so things can differ quite a lot. Still, I found that a DCGAN-like architecture works pretty well.

I do not remember precisely why it is the decoder where leaky ReLU is used, but I think my reasoning was that the decoder would have to adapt more during training due to changes in the latent space. Overall, this configuration seemed to work better in terms of the visual quality of the results, but I didn't do any quantitative study.

@botkevin
Author

botkevin commented Dec 4, 2020

Thanks a lot for your insight. I'm not fully convinced that the generator/discriminator structure is completely analogous, since GANs seem to have a more unstable balance between the two, but I do see the parallels.
I don't want to bother you too much, and this is somewhat off-topic, but have you seen https://www.microsoft.com/en-us/research/blog/less-pain-more-gain-a-simple-method-for-vae-training-with-less-of-that-kl-vanishing-agony/ or the like before? It is essentially a scheduled KL weight. They apply VAEs to NLP, but I am wondering whether the benefits they claim would also carry over to vision.
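For reference, the linked post describes a cyclical KL-weight schedule: the weight beta ramps from 0 to 1 over the first part of each cycle, then holds at 1. A minimal sketch (the function name, cycle length, and 50/50 ramp/hold split are illustrative assumptions, not from that post or this repo):

```python
def kl_weight(step, cycle_len=10000, ramp_frac=0.5):
    """Cyclical KL weight: linear ramp 0 -> 1, then hold at 1, repeating."""
    pos = (step % cycle_len) / cycle_len  # position within the cycle, in [0, 1)
    if pos < ramp_frac:
        return pos / ramp_frac            # linear ramp phase
    return 1.0                            # hold phase at full KL weight

# The VAE loss would then be: recon_loss + kl_weight(step) * kl_divergence
print(kl_weight(0))      # 0.0 at the start of each cycle
print(kl_weight(2500))   # 0.5 halfway through the ramp
print(kl_weight(7500))   # 1.0 in the hold phase
```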

@botkevin botkevin closed this as completed Dec 6, 2020