
Bug in PNAConv implementation #5514

Closed
adrianjav opened this issue Sep 23, 2022 · 1 comment · Fixed by #5515
@adrianjav
Contributor

🐛 Describe the bug

When aggregating messages in PNAConv, the scalers are applied one after the other and the results stacked, but the loop reuses the same variable throughout. As a result, each scaler is applied on top of the previous scaler's output, instead of each scaler being applied independently to the original aggregated value.

This is the latest code in the repository:

out = self.aggr(x, index, ptr, dim_size, dim)

assert index is not None
deg = degree(index, num_nodes=dim_size, dtype=out.dtype).clamp_(1)
size = [1] * len(out.size())
size[dim] = -1
deg = deg.view(size)

outs = []
for scaler in self.scaler:
    if scaler == 'identity':
        pass
    elif scaler == 'amplification':
        out = out * (torch.log(deg + 1) / self.avg_deg['log'])
    elif scaler == 'attenuation':
        out = out * (self.avg_deg['log'] / torch.log(deg + 1))
    elif scaler == 'linear':
        out = out * (deg / self.avg_deg['lin'])
    elif scaler == 'inverse_linear':
        out = out * (self.avg_deg['lin'] / deg)
    else:
        raise ValueError(f"Unknown scaler '{scaler}'")
    outs.append(out)

Note how the variable out is reassigned on every iteration, so the scalers compound across the loop.
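To make the compounding concrete, here is a minimal sketch with plain Python floats and stand-in scaler functions (not the actual PyG tensors or scaler formulas):

```python
out = 2.0  # stands in for the aggregated message value
scalers = [lambda x: x * 3.0,   # stands in for 'amplification'
           lambda x: x * 0.5]   # stands in for 'attenuation'

# Buggy pattern: each scaler sees the output of the previous one.
buggy = []
tmp = out
for s in scalers:
    tmp = s(tmp)       # reassigning the loop variable compounds the scalers
    buggy.append(tmp)
print(buggy)  # [6.0, 3.0] -- second entry is 2.0 * 3.0 * 0.5, not 2.0 * 0.5

# Intended pattern: each scaler is applied to the same base value.
fixed = [s(out) for s in scalers]
print(fixed)  # [6.0, 1.0] -- second entry is 2.0 * 0.5
```

The second entry of the buggy list carries the first scaler's factor along, which is exactly the behavior described above.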

Proposed solution

The solution is as simple as using a different variable name within the for loop, e.g., doing out_i = out at the start of the loop body in line 77. Since it is such a simple change, I am happy to open a pull request if requested.

I might be wrong here, so please correct me if that's the case, but looking at the original paper and the official implementation, I think this is a bug:
https://github.com/lukecavabarrett/pna/blob/0c630c2e2d88bb1ef784c850dd8f3a069fcd9489/models/pytorch_geometric/pna.py#L158
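For illustration, the corrected loop could look like the sketch below (plain floats and math.log instead of tensors and torch.log; the function name and simplified signature are my own, not PyG's):

```python
import math

def scale_messages(out, deg, scalers, avg_deg):
    """Sketch of the corrected scaling loop: each scaler starts from the
    same base value `out` via a fresh variable `out_i`."""
    outs = []
    for scaler in scalers:
        if scaler == 'identity':
            out_i = out
        elif scaler == 'amplification':
            out_i = out * (math.log(deg + 1) / avg_deg['log'])
        elif scaler == 'attenuation':
            out_i = out * (avg_deg['log'] / math.log(deg + 1))
        elif scaler == 'linear':
            out_i = out * (deg / avg_deg['lin'])
        elif scaler == 'inverse_linear':
            out_i = out * (avg_deg['lin'] / deg)
        else:
            raise ValueError(f"Unknown scaler '{scaler}'")
        outs.append(out_i)
    return outs

# With deg=1 and avg_deg matching the scaler factors, every scaler is a no-op:
print(scale_messages(2.0, 1.0, ['identity', 'amplification', 'linear'],
                     {'log': math.log(2), 'lin': 1.0}))
# -> [2.0, 2.0, 2.0]
```

In the real PNAConv code the same fix amounts to assigning each scaled result to a fresh variable (e.g. out_i) instead of reassigning out.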

Environment

  • PyG version:
  • PyTorch version:
  • OS:
  • Python version:
  • CUDA/cuDNN version:
  • How you installed PyTorch and PyG (conda, pip, source):
  • Any other relevant information (e.g., version of torch-scatter):
@adrianjav adrianjav added the bug label Sep 23, 2022
@EdisonLeeeee
Contributor

Thanks for reporting. You are right. This is not expected behavior in PNAConv. Please feel free to send a PR for this :)
