
Conversation

@vanbasten23 (Collaborator) commented on Jun 6, 2024

Needs pytorch/pytorch#128176

Test plan: PJRT_DEVICE=CUDA python pytorch/xla/test/test_operations.py -k test_dlpack_xla_to_pytorch_cuda_protocol_conversion
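For context, the conversion that test exercises looks roughly like the sketch below. This is an illustrative placeholder rather than the actual test body, and it assumes PJRT_DEVICE=CUDA with a CUDA-enabled PyTorch build:

```python
# Minimal sketch (not the actual test body) of the conversion the test
# exercises. Assumes PJRT_DEVICE=CUDA and a CUDA-enabled PyTorch build;
# the tensor values and assertions are illustrative placeholders.
import torch
import torch_xla.core.xla_model as xm

t_xla = torch.arange(5, device=xm.xla_device())  # lives on XLA:CUDA

# torch.from_dlpack consumes the producer's __dlpack_device__/__dlpack__
# hooks directly, which is the protocol path this PR adds a test for.
t_cuda = torch.from_dlpack(t_xla)

assert t_cuda.device.type == "cuda"
assert torch.equal(t_cuda.cpu(), t_xla.cpu())
```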

@vanbasten23 vanbasten23 changed the title from "Add a test to convert xla gpu tensor to cuda tensor by using the protocal." to "Add a test to convert xla gpu tensor to cuda tensor by using the protocol." on Jun 6, 2024
@vanbasten23 vanbasten23 force-pushed the xiowei/use_producer_dlpack_device_from_xla_to_cuda branch from 3c50178 to b3e3a1a on June 6, 2024 23:24
@vanbasten23 vanbasten23 marked this pull request as ready for review June 6, 2024 23:29
@vanbasten23 vanbasten23 requested review from JackCaoG and ysiraichi June 6, 2024 23:40
@ysiraichi (Collaborator) commented:

Closing in favor of: #8294

@ysiraichi ysiraichi closed this Oct 21, 2024
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Dec 5, 2024
Taking over: #128176.

In summary, this PR:

- `__dlpack__`: Calls the PyTorch/XLA `to_dlpack` function if the tensor lives on an XLA:CUDA device
- `__dlpack_device__`: Correctly maps PyTorch/XLA tensors to `kDLGPU` when XLA:CUDA is in use (see the sketch below)

The tests are introduced in pytorch/xla#7213.
Pull Request resolved: #138470
Approved by: https://github.com/albanD

Co-authored-by: iefgnoix <isaacwxf23@gmail.com>
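
A rough sketch of how the two hooks described above behave, assuming PJRT_DEVICE=CUDA; the exact wiring inside PyTorch/XLA may differ from this illustration:

```python
# Hedged illustration of the two DLPack protocol hooks; internals in
# PyTorch/XLA may differ from this sketch.
import torch
import torch_xla.core.xla_model as xm

t = torch.ones(3, device=xm.xla_device())  # XLA:CUDA tensor

# __dlpack_device__: reports (kDLGPU, device_id) for XLA:CUDA tensors.
# kDLGPU is DLPack device type 2 (named kDLCUDA in newer DLPack releases),
# telling consumers the underlying buffer is plain CUDA device memory.
dev_type, dev_id = t.__dlpack_device__()

# __dlpack__: delegates to PyTorch/XLA's to_dlpack and hands back a DLPack
# capsule that a consumer can import zero-copy.
capsule = t.__dlpack__()
t_cuda = torch.utils.dlpack.from_dlpack(capsule)
assert t_cuda.device.type == "cuda"
```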
AmdSampsa pushed a commit to AmdSampsa/pytorch that referenced this pull request Dec 9, 2024 (same commit message as above).