🐛 Bug
The `RandomLinkSplit` class performs negative sampling with the device fixed to the `cpu`. If the original data is on a `cuda` device, we run into the error "Expected all tensors to be on the same device" when we operate on the two tensors (e.g. `torch.cat`). The culprit can be found around this line.
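The failure mode itself is easy to demonstrate in isolation; a minimal sketch (independent of PyG, assuming a CUDA device is available):

```python
import torch

pos_edge_index = torch.randint(0, 10, (2, 8), device='cuda')  # edges on the GPU
neg_edge_index = torch.randint(0, 10, (2, 8))                 # negatives on the CPU

# RuntimeError: Expected all tensors to be on the same device ...
torch.cat([pos_edge_index, neg_edge_index], dim=-1)
```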
To Reproduce
Example pulled from https://github.com/pyg-team/pytorch_geometric/blob/master/examples/link_pred.py
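A condensed sketch of that example with the graph moved to the GPU before splitting; the dataset and split parameters mirror the linked script, but the exact values here are illustrative:

```python
import torch
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid

device = torch.device('cuda')  # assumes a CUDA device is available

dataset = Planetoid('data/Planetoid', name='Cora',
                    transform=T.NormalizeFeatures())
data = dataset[0].to(device)  # graph now lives on the GPU

transform = T.RandomLinkSplit(num_val=0.05, num_test=0.1, is_undirected=True)

# In 2.0.2 this raises "Expected all tensors to be on the same device":
# the internally sampled negatives are created on the CPU and then
# concatenated with the GPU edge tensors.
train_data, val_data, test_data = transform(data)
```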
Expected behavior
I expect the sampled edges to be on the same device as the original edge tensor.
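In other words, continuing the sketch above, this invariant should hold:

```python
# Expected (sketch): the sampled edges in `edge_label_index` inherit the
# device of the input graph instead of silently falling back to the CPU.
assert train_data.edge_label_index.device == data.edge_index.device
```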
Environment
- PyG version (`torch_geometric.__version__`): 2.0.2
- PyTorch version (`torch.__version__`): 1.9.0+cu111
- Python version (e.g., 3.9): 3.9
- How you installed PyTorch and PyG (`conda`, `pip`, source): pip
- Any other relevant information (e.g., version of `torch-scatter`):

Additional context
We can see from this line that the negative sampling method does not take a device parameter.
I originally opened a StackOverflow post, and a user suggested I report this officially. I then took a closer look, and it is indeed a bug.
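Until this is fixed, a possible workaround (a sketch continuing the reproduction above, not an official recommendation) is to run the split on the CPU and move the resulting splits to the target device afterwards:

```python
# Workaround sketch: split on the CPU, then move each split to the GPU.
# (`data`, `transform`, and `device` as in the reproduction sketch above.)
train_data, val_data, test_data = transform(data.cpu())
train_data = train_data.to(device)
val_data = val_data.to(device)
test_data = test_data.to(device)
```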
Further inspecting how `negative_sampling` works, I have confirmed this buggy behavior of the negative samples being on the `cpu` device.
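A sketch of that check (the tensor shapes are arbitrary; the printed devices are what I observe on 2.0.2):

```python
import torch
from torch_geometric.utils import negative_sampling

edge_index = torch.randint(0, 100, (2, 500), device='cuda')
neg_edge_index = negative_sampling(edge_index, num_nodes=100)

print(edge_index.device)      # cuda:0
print(neg_edge_index.device)  # cpu  <- the reported bug (2.0.2)
```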
Thanks for opening the issue.

As you have experienced, tensors must reside on the same device before performing certain operations, so the first solution, at least for me, is the one that feels more natural. Furthermore, allowing the user to specify a (possibly wrong) device has no real benefit, and may well introduce bugs if the user inadvertently passes the wrong device.
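Presumably the first solution means inferring the device from the input `edge_index` rather than exposing a `device` argument; a minimal sketch of that idea (an illustrative wrapper, not the actual patch):

```python
import torch
from torch_geometric.utils import negative_sampling

def negative_sampling_on_input_device(edge_index: torch.Tensor, **kwargs) -> torch.Tensor:
    # Illustrative wrapper: run the existing (CPU-bound) sampler, then move
    # the result onto whatever device the input edges live on.
    neg_edge_index = negative_sampling(edge_index, **kwargs)
    return neg_edge_index.to(edge_index.device)
```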