Thank you for sharing your work. I have noticed a mismatch between the paper and the repo:
1. In the paper you mention using ReLU, while in the code you're using LeakyReLU.
2. In the paper you mention using SGD, while in the code you're using Adam.
3. In the paper you mention using a decaying learning rate (by a factor of 2.5% each epoch), while in the code you're using a static learning rate.
Could you kindly elaborate on which settings to use?
Many thanks :)
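For concreteness, here is a minimal sketch of the learning-rate discrepancy in point 3. This is not code from the repo; it assumes "decaying by a factor of 2.5% each epoch" means multiplying the learning rate by 0.975 once per epoch, and the initial learning rate of 0.01 is a placeholder, not a value from the paper:

```python
def decayed_lr(lr0: float, epoch: int, decay: float = 0.025) -> float:
    """Paper's stated schedule (as I read it): the learning rate
    shrinks by 2.5% each epoch, i.e. it is multiplied by 0.975."""
    return lr0 * (1.0 - decay) ** epoch

def static_lr(lr0: float, epoch: int) -> float:
    """Repo's behaviour: the learning rate stays fixed across epochs."""
    return lr0

lr0 = 0.01  # placeholder initial LR
schedule = [round(decayed_lr(lr0, e), 6) for e in range(3)]
print(schedule)  # [0.01, 0.00975, 0.009506]
```

If the paper instead means dividing by 1.025 each epoch, the numbers differ slightly, which is another reason clarification would help.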
@torki-hossein Yes, I am actually trying to reproduce this work for comparative purposes.
I'm not the author of this work; I am also trying to reproduce it, both for comparison and to extend it. Please answer my mail (in your mailbox). ;)