Why does this reimplementation use bias=False in all conv layers? #1
Comments
Hi, thanks for your comment! For the second question: STGAN is based directly on the work of AttGAN, and AttGAN uses a mechanism to control the attribute manipulation intensity by making the target vector lie uniformly in [-1, 1] during training. You can read the AttGAN paper and its implementation for more details.
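A minimal PyTorch sketch of that mechanism (the function name and shapes are assumptions, not the repo's actual code): binary attribute labels in {0, 1} are mapped to {-1, 1}, then scaled by a random coefficient, so the training targets spread over [-1, 1].

    import torch

    def randomize_target(att):
        # att: (batch, n_attrs) binary attribute labels in {0, 1}
        # (hypothetical helper; not the repo's actual function)
        signed = att * 2.0 - 1.0            # map {0, 1} -> {-1, 1}
        coeff = torch.rand(att.size(0), 1)  # uniform coefficient in [0, 1)
        return signed * coeff               # targets spread over [-1, 1]

Because the generator sees targets of varying magnitude during training, the same vector can be scaled continuously at test time to dial the strength of each attribute edit.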
Wow!! Thank you very much, it helps me A LOT!!!
Hi, I noticed another difference: your version doesn't use inject layers and uses only 3 STU layers. I modified it to use inject layers in the decoder and 4 shortcut layers, and found that it is difficult to converge. Did you try this? If so, could you give me some suggestions on training? The following are my parameters:

    data:
      dataset: celeba
    model:
      g_conv_dim: 48
    training:
      batch_size: 64
    steps:
      summary_step: 10
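For readers unfamiliar with the term: in AttGAN/STGAN, an "inject layer" concatenates the (spatially tiled) target attribute vector onto an intermediate decoder feature map. A rough sketch of one such step (the names and shapes here are my assumptions, not the repo's code):

    import torch

    def inject(feat, att):
        # feat: (B, C, H, W) decoder feature map; att: (B, n_attrs)
        # Tile the attribute vector over the spatial grid and concatenate
        # it as extra channels (illustrative only).
        b, _, h, w = feat.size()
        tiled = att.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return torch.cat([feat, tiled], dim=1)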
Hi, bluestyle97!
Thanks for your nice PyTorch reimplementation! It is much faster than the official version. But I found some differences: 1. the conv layers have no bias; 2. the target label is multiplied by a random coefficient. Could you explain these? I am confused about why you did this. Thank you very much!
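On the first question, the thread never answers it directly, but the usual reason (an assumption here, not the author's confirmed rationale) is that a convolution followed by a normalization layer gains nothing from a bias: the norm subtracts the per-channel mean and adds its own learnable shift, which absorbs any conv bias. A minimal sketch:

    import torch.nn as nn

    # Conv + BatchNorm block: bias=False because BatchNorm2d's learnable
    # shift (beta) plays the role of the bias, so a conv bias would just
    # be redundant parameters.
    block = nn.Sequential(
        nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(128),
        nn.ReLU(inplace=True),
    )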