Hello,
In the generator, I am trying to understand why we need a parametric ReLU (PReLU) activation after each pixel-shuffle layer. Also, what is the purpose of the final Conv layer in the generator? Can you please explain?
This repo is just a re-implementation (there is no official release from the authors), so I need to follow the settings from the paper.
For PReLU, I think the reason is simply that it is a more powerful activation function: its negative slope is learned during training rather than fixed. I don't think you would get very different results by replacing it with ReLU or LeakyReLU (though LeakyReLU is more suitable for networks with batch norm).
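For reference, here is a minimal sketch of the difference between the three activations. PyTorch is used here purely for illustration (an assumption; this repo may use a different framework):

```python
import torch.nn as nn

# PReLU learns its negative-slope coefficient during training
# (one shared scalar by default, or one per channel),
# whereas LeakyReLU fixes the slope as a hyperparameter.
prelu = nn.PReLU()          # learnable slope, initialized to 0.25
leaky = nn.LeakyReLU(0.2)   # fixed slope of 0.2 for negative inputs
relu  = nn.ReLU()           # zeros out all negative inputs
```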
For the last convolution, I think it is just for refinement. You can try removing it, but you will likely find that it makes the optimization process more difficult.
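To make the structure concrete, here is a minimal PyTorch sketch of the generator's tail following the setting in the SRGAN paper: two sub-pixel upsampling blocks, each followed by PReLU, then a final 9x9 conv that maps the feature map back to a 3-channel RGB image. The class names and the PyTorch framework are illustrative assumptions, not this repo's actual code:

```python
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    """Sub-pixel upsampling: a conv expands the channels, PixelShuffle
    rearranges them into a 2x larger feature map, and PReLU adds a
    learnable nonlinearity after the shuffle."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(2)  # (C*4, H, W) -> (C, 2H, 2W)
        self.act = nn.PReLU()

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))

class GeneratorTail(nn.Module):
    """Two 2x upsample blocks (4x total), then the final conv that
    collapses the 64-channel feature map into a 3-channel image."""
    def __init__(self, channels=64):
        super().__init__()
        self.upsample = nn.Sequential(UpsampleBlock(channels),
                                      UpsampleBlock(channels))
        self.final = nn.Conv2d(channels, 3, kernel_size=9, padding=4)

    def forward(self, x):
        return self.final(self.upsample(x))

x = torch.randn(1, 64, 24, 24)
print(GeneratorTail()(x).shape)  # torch.Size([1, 3, 96, 96])
```

Note that in this setting the final conv is also what brings the 64-channel features back to image space, which is consistent with the refinement role described above.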