(Leaky) ReLU #2
Is there a reason why you use a normal ReLU for encoding, but a leaky ReLU for decoding?

There is no particular reason. Overall, the architecture of the encoder and decoder is DCGAN-like (the encoder is similar to the discriminator and the decoder is similar to the generator): https://arxiv.org/pdf/1511.06434.pdf. Obviously, that paper is about GANs, and since we have a VAE here things can differ quite a bit, but I found that a DCGAN-like architecture works pretty well. I do not remember precisely why leaky ReLU ended up in the decoder, but I think the reasoning was that the decoder has to adapt more during training because the latent space keeps changing. Overall, this configuration worked better in terms of the visual quality of the results, but I didn't do any quantitative study.

Thanks a lot for your insight. I'm not fully convinced the analogy with the generator/discriminator structure holds completely, since GANs seem to have a more unstable balance between the two components, but I do see the parallels.
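To make the activation placement concrete, here is a minimal sketch of a DCGAN-style VAE encoder/decoder pair in PyTorch, with ReLU throughout the encoder and LeakyReLU throughout the decoder, as discussed above. The layer widths, 64x64 image size, latent dimension, and LeakyReLU slope of 0.2 are illustrative assumptions, not the repository's actual configuration.

```python
# Sketch only: DCGAN-style VAE with ReLU in the encoder and LeakyReLU in the decoder.
# All sizes (64x64 RGB input, latent_dim=128, channel counts) are assumed for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Strided convolutions downsample 64x64 -> 4x4, like the DCGAN discriminator, with plain ReLU.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.BatchNorm2d(512), nn.ReLU(inplace=True),
        )
        self.fc_mu = nn.Linear(512 * 4 * 4, latent_dim)
        self.fc_logvar = nn.Linear(512 * 4 * 4, latent_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 512 * 4 * 4)
        # Transposed convolutions upsample 4x4 -> 64x64, like the DCGAN generator, but with
        # LeakyReLU so units keep some gradient even when pre-activations go negative.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 512, 4, 4)
        return self.deconv(h)

def reparameterize(mu, logvar):
    # Reparameterization trick: differentiable sample from N(mu, sigma^2).
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    x = torch.randn(2, 3, 64, 64)            # dummy batch of 64x64 RGB images
    mu, logvar = enc(x)
    x_hat = dec(reparameterize(mu, logvar))
    print(x_hat.shape)                       # torch.Size([2, 3, 64, 64])
```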