
Add support for heterogeneous workloads in multi XPU benchmarks #8382

Merged

Conversation

@DamianSzwichtenberg (Member) commented Nov 15, 2023

Fix for the following error:

RuntimeError: Modules with uninitialized parameters can't be used with DistributedDataParallel. Run a dummy forward pass to correctly initialize the modules.

HeteroGAT requires additional treatment: the lin_dst of each of its convolutions is never initialized for the paper-cites-paper relation (for more details, look at this code).
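The workaround suggested by the error message itself — a dummy forward pass before wrapping in DDP — can be sketched in plain PyTorch. Here `LazyLinear` stands in for the lazily initialized `lin_dst`; the model and shapes are illustrative, not the actual benchmark code:

```python
import torch
from torch import nn

# A model with lazily initialized parameters, analogous to HeteroGAT's
# lin_dst: their shapes are unknown until the first forward pass.
model = nn.Sequential(nn.LazyLinear(16), nn.ReLU(), nn.LazyLinear(4))

# Run a dummy forward pass to materialize all parameters *before*
# wrapping the model in DistributedDataParallel.
with torch.no_grad():
    model(torch.randn(2, 8))

# All parameters are now concrete tensors, so DDP can broadcast them.
assert not any(isinstance(p, nn.parameter.UninitializedParameter)
               for p in model.parameters())
```

DDP rejects uninitialized parameters because it must broadcast every parameter tensor from rank 0 at construction time, which requires concrete shapes.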

@JakubPietrakIntel (Contributor) left a comment

The fix seems valid, but I wonder if there's a way to include this on the GNN-layer level.
@rusty1s don't you think we should still initialize
x_dst = self.lin_dst(x_dst).view(-1, H, C)
when there's a self-loop on x_src?
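The mechanism behind the question above can be demonstrated with a minimal stand-in for GATConv's (lin_src, lin_dst) pair — names and structure are hypothetical, not PyG's actual implementation. When the destination features are the source features (a homogeneous, "self-loop" call), lin_dst is never invoked, so its lazy parameters stay uninitialized — exactly the state DDP rejects:

```python
import torch
from torch import nn

class TinyBipartiteConv(nn.Module):
    # Hypothetical stand-in for a bipartite conv with separate
    # source/destination projections, like GATConv's lin_src / lin_dst.
    def __init__(self, out_channels: int):
        super().__init__()
        self.lin_src = nn.LazyLinear(out_channels)
        self.lin_dst = nn.LazyLinear(out_channels)

    def forward(self, x_src, x_dst=None):
        out = self.lin_src(x_src)
        if x_dst is not None:  # only taken for true bipartite input
            out = out + self.lin_dst(x_dst)
        return out

conv = TinyBipartiteConv(4)
conv(torch.randn(3, 8))  # homogeneous call: x_dst is None

# lin_src ran, so it is initialized; lin_dst never ran, so it is not.
assert not isinstance(conv.lin_src.weight, nn.parameter.UninitializedParameter)
assert isinstance(conv.lin_dst.weight, nn.parameter.UninitializedParameter)
```

Initializing lin_dst on the self-loop path (as the comment proposes) would keep every parameter materialized regardless of which relations appear in the data.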

@rusty1s (Member) commented Nov 17, 2023

Discussed with @DamianSzwichtenberg. We should update GATConv here, yes.

@DamianSzwichtenberg (Member, Author) replied:

A possible solution is available in #8397.

rusty1s added a commit that referenced this pull request Nov 19, 2023
This PR fixes the error described in #8382.

---------

Co-authored-by: rusty1s <matthias.fey@tu-dortmund.de>
@DamianSzwichtenberg DamianSzwichtenberg merged commit f7dd449 into pyg-team:master Nov 20, 2023
14 checks passed