
Questions regarding your implementation #2

Closed
NIRVANALAN opened this issue Feb 14, 2020 · 2 comments

NIRVANALAN commented Feb 14, 2020

Hi, I have read your code and have some questions:

  1. You adopt a self-implemented l2_reg_loss and add it to the backpropagation loss. Why don't you use the L2 regularization of the Adam optimizer by setting the weight_decay param?
  2. In your GCN implementation, I noticed the following:
    [screenshot: GCN layer forward pass containing two dropout calls]
    I wonder why there are two dropout operations in one layer? (A conventional single-dropout layer is sketched below for comparison.)
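
For context, a minimal sketch of a conventional GCN layer in the style of Kipf & Welling, with dropout applied once to the layer input; the class and argument names here are illustrative, not taken from this repository:

```python
import torch
import torch.nn.functional as F

class GCNLayer(torch.nn.Module):
    # Hypothetical layer, shown for comparison only.
    def __init__(self, in_dim, out_dim, dropout=0.5):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.dropout = dropout

    def forward(self, x, adj):
        # A single dropout per layer, applied to the input features.
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = self.linear(x)  # feature transformation
        return adj @ x      # aggregation with the normalized adjacency
```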
NIRVANALAN changed the title from "Code regarding L2" to "Questions regarding your implementation" on Feb 14, 2020
shchur (Owner) commented Feb 21, 2020

Hi,

  1. I'm not sure if that's the right thing to do, but I remember reading that PyTorch and TensorFlow implement L2 regularization / weight decay differently (https://openreview.net/pdf?id=rk6qdGgCZ). So I decided to use the same version of L2 regularization as in the original TensorFlow implementation for consistency. (Both variants are sketched below.)

  2. That indeed seems to be a mistake, thanks a lot for pointing it out! I will have a look at it on Monday.
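
For illustration, a minimal sketch of the two variants, assuming a toy model; all names and values are hypothetical, and restricting the explicit penalty to the first layer follows the original TensorFlow GCN code rather than this repository:

```python
import torch

# Hedged sketch, not this repository's code: a toy model and an
# illustrative regularization strength.
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 4))
reg_strength = 5e-4

def l2_reg_loss(params, strength):
    # tf.nn.l2_loss-style penalty: strength * sum(w ** 2) / 2,
    # restricted to an explicit list of parameters.
    return strength * sum((p ** 2).sum() for p in params) / 2

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))

# Variant A: explicit penalty in the loss, applied only to the first
# layer's weights (as in the original TensorFlow GCN).
loss = torch.nn.functional.cross_entropy(model(x), y) \
       + l2_reg_loss(model[0].parameters(), reg_strength)

# Variant B: optimizer-side L2 on all parameters. torch.optim.Adam adds
# reg_strength * p to each gradient before the adaptive update (coupled
# L2); torch.optim.AdamW instead applies the decoupled weight decay
# discussed in the linked paper.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01,
                             weight_decay=reg_strength)
```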

shchur (Owner) commented Feb 25, 2020

I have just fixed this issue in 66a015d; everything seems to work fine now. Thanks again for pointing it out!
