Dimensions do not match in VGRNN #9

Open
AlessandroFazio opened this issue May 29, 2023 · 2 comments

Comments


AlessandroFazio commented May 29, 2023

Hello, @jhljx. First of all, I want to thank you for the great work you have put into this open-source project.

While trying the VGRNN code, I found an error at line 497 of baseline/VGRNN.py. When concatenating phi_x_t and h[-1], there is a size mismatch: phi_x_t has shape (num_nodes, hidden_dim) while h[-1] has shape (input_features, hidden_dim), so joining them on dim=1 raises an error unless input_features == num_nodes. That only holds if you are very lucky, or if there are no node features and they are initialized as an identity matrix of size (num_nodes, num_nodes), which is the base case you support.
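For illustration, here is a minimal sketch of the mismatch (the tensor names follow the issue; the concrete sizes are made-up example values):

```python
import torch

num_nodes, input_features, hidden_dim = 100, 16, 32
rnn_num_layers = 1

# One row per node, as produced by the feature extractor phi_x.
phi_x_t = torch.randn(num_nodes, hidden_dim)                 # (100, 32)

# Hidden state initialized with input_features as its second dimension.
h = torch.zeros(rnn_num_layers, input_features, hidden_dim)  # (1, 16, 32)

# RuntimeError: sizes must match on dim=0 (100 vs 16) when joining on dim=1,
# unless input_features == num_nodes.
combined = torch.cat([phi_x_t, h[-1]], dim=1)
```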

Have you already considered this case and I am doing something wrong, or is this case simply not supported? Thanks in advance.

AlessandroFazio (Author) commented May 30, 2023

Hello @jhljx, looking deeper into the code, I think I have found the issue. I have not tried these new changes yet, but this is likely the source of the error.

I looked at the original code in the VGRNN repository provided by the authors. In their VGRNN forward method they pass x_in, edge_list, and the initial hidden state (=None), as you do. However, if you look at their prediction.py, they pass x_list into torch.stack(...), which outputs a 3D tensor of size (T, num_nodes, input_features). They then initialize h as a 3D tensor of size (rnn_num_layers, x.size(1), hidden_dim). Here is the point: x.size(1) is num_nodes, which is why h[i], for any scalar index i, yields a 2D tensor of size (num_nodes, hidden_dim).
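In code, the shapes in the authors' version come out like this (a sketch reconstructed from the description above, not a verbatim copy of their repository):

```python
import torch

T, num_nodes, input_features = 10, 100, 16
rnn_num_layers, hidden_dim = 1, 32

x_list = [torch.randn(num_nodes, input_features) for _ in range(T)]

x = torch.stack(x_list)  # (T, num_nodes, input_features)
h = torch.zeros(rnn_num_layers, x.size(1), hidden_dim)  # x.size(1) == num_nodes

# h[i] is therefore (num_nodes, hidden_dim) and lines up with phi_x_t.
assert h[-1].shape == (num_nodes, hidden_dim)
```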

However, in your code h is initialized with size (rnn_num_layers, input_features, hidden_dim). When input_features == num_nodes, as in the no-node-features mode, everything works and the bug goes unnoticed.
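If that is right, the fix should be a one-line change to the h initialization (a hedged sketch continuing the variables from the snippet above; the actual variable names in baseline/VGRNN.py may differ):

```python
# Before: second dimension tied to the feature count, which breaks
# whenever input_features != num_nodes.
# h = torch.zeros(rnn_num_layers, input_features, hidden_dim)

# After: second dimension tied to the node count, mirroring the
# authors' use of x.size(1).
h = torch.zeros(rnn_num_layers, num_nodes, hidden_dim)
```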

I will let you know whether this actually fixes the issue as soon as I can try the new code. I hope this helps improve the project.

jhljx (Owner) commented May 31, 2023

This bug has been fixed. Thanks for your feedback.
