
Question about the step to update the GNN and the Graph Editor #1

Closed
jumxglhf opened this issue Jan 25, 2023 · 2 comments

@jumxglhf

Dear EERM authors,

Thanks for sharing the code of this project. I really enjoy reading the paper!

There might be one spot in your implementation that I am misunderstanding, and it would be really helpful if you could explain this line of code.

if m == 0:

My question is about this line: shouldn't it be "m == args.T - 1", as indicated in Algorithm 1 in the paper?

I think the current implementation will not utilize the learned multiple environments (i.e., the parameter of shape K x N x N in the graph editor class) at all, since the GNN is updated before the graph editor is, and the graph editors are reset every time a new epoch starts.
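
To make the flow concrete, here is a minimal, self-contained sketch of how I read Algorithm 1. All names here (Editor, GNN, env_losses, the dummy data, and the inner/outer objectives) are my own stand-ins, not the repository's code, and the variance-based inner/outer losses are only my reading of the EERM idea. The point is just the placement of the GNN update: it should happen after the editors' T inner steps, i.e. on m == T - 1, not m == 0.

import torch
import torch.nn as nn

K, N, D, T = 3, 8, 4, 5                  # hypothetical sizes: K environments, N nodes, D features, T inner steps
x = torch.randn(N, D)                    # dummy node features
y = torch.randint(0, 2, (N,))            # dummy node labels

class Editor(nn.Module):
    # Stand-in graph editor: a K x N x N learnable parameter, one edited view per environment.
    def __init__(self):
        super().__init__()
        self.B = nn.Parameter(torch.zeros(K, N, N))
    def reset_parameters(self):
        nn.init.normal_(self.B, std=0.1)
    def forward(self):
        return torch.sigmoid(self.B)     # K soft adjacency matrices

class GNN(nn.Module):
    # Stand-in one-layer GNN backbone.
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(D, 2)
    def forward(self, adj):
        return self.lin(adj @ x)         # propagate then project

editor, gnn = Editor(), GNN()
opt_editor = torch.optim.Adam(editor.parameters(), lr=1e-2)
opt_gnn = torch.optim.Adam(gnn.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def env_losses():
    # One loss per edited environment.
    adjs = editor()
    return torch.stack([loss_fn(gnn(adjs[k]), y) for k in range(K)])

for epoch in range(2):
    editor.reset_parameters()            # editors re-initialized at the start of each epoch
    for m in range(T):
        # Inner step: the editors ascend the variance across environments (my reading of the exploration objective).
        losses = env_losses()
        opt_editor.zero_grad()
        (-losses.var()).backward()
        opt_editor.step()

        # Outer step: update the GNN only after the editors have taken all T steps,
        # i.e. when m == T - 1 (the point raised in this issue), not when m == 0.
        if m == T - 1:
            losses = env_losses()
            outer = losses.mean() + 1.0 * losses.var()
            opt_gnn.zero_grad()
            outer.backward()
            opt_gnn.step()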

Thanks in advance!

@qitianwu
Owner

qitianwu commented Feb 1, 2023

Hi Mingxuan,

Thank you for carefully checking the technical details. After double-checking, I think you are correct: the mentioned line should be "m == args.T - 1" to resolve the inconsistency for T > 1.

In our experiments, we found that the model is somewhat sensitive to the setup of the graph editors, which depends heavily on the dataset (given the diversity of the datasets we used). Therefore, the value of T and whether the graph editors are reset differ across datasets. In some of our experiments (e.g., arxiv and cora), the graph editors were indeed not optimized and could still yield competitive performance. The reason could be that randomly initialized graph editors can still augment the input data in a way that explores the contexts and enables effective learning of the outer optimization (the main objective for OOD generalization).

@jumxglhf
Author

jumxglhf commented Feb 1, 2023

Yes, it looks like graph editors with random parameters still work. Thanks for the clarification!

jumxglhf closed this as completed Feb 1, 2023