[RFC] Memory allocator interoperability with PyTorch #7556

Open · Tracked by #7555
kmaehashi opened this issue May 11, 2023 · 0 comments

Description

Using CuPy in conjunction with PyTorch is a common use case. One of the difficulties when combining two CUDA-powered libraries (the CuPy/PyTorch combination being a common, but not the only, example) is sharing memory pools and streams between them. Currently, utility functions for sharing memory pools and streams between CuPy and PyTorch are provided in pytorch-pfn-extras (pytorch_pfn_extras.cuda.*); however, it would make sense to host them in CuPy itself if there is demand.
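For reference, the existing pytorch-pfn-extras utilities are used roughly as follows. This is a minimal sketch (the array contents are purely illustrative), but `use_torch_mempool_in_cupy()`, `use_default_mempool_in_cupy()`, and `stream()` are the actual functions from the `pytorch_pfn_extras.cuda` namespace mentioned above:

```python
import cupy
import torch
import pytorch_pfn_extras as ppe

# Share a single memory pool: route CuPy allocations through
# PyTorch's CUDA caching allocator.
ppe.cuda.use_torch_mempool_in_cupy()

# Share a stream: run work from both libraries on the same CUDA stream.
stream = torch.cuda.Stream()
with ppe.cuda.stream(stream):
    x = torch.arange(4, dtype=torch.float32, device='cuda')
    y = cupy.asarray(x)  # zero-copy view via __cuda_array_interface__
    y *= 2               # also visible through `x`, since the memory is shared

# Revert CuPy to its own default memory pool.
ppe.cuda.use_default_mempool_in_cupy()
```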

Specific features in mind, mirroring what pytorch-pfn-extras provides today, are:

- Using PyTorch's memory pool from CuPy, and reverting to CuPy's default pool (cf. pytorch_pfn_extras.cuda.use_torch_mempool_in_cupy() / use_default_mempool_in_cupy()).
- Sharing the current CUDA stream between the two libraries (cf. pytorch_pfn_extras.cuda.stream()).

This RFC issue is intended to gather community interest in this feature and to discuss where these methods should live (cupyx.???) once we decide to proceed. A sketch of the plumbing involved follows below.
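For context on what hosting this in CuPy would involve, the memory-pool half can already be expressed with public APIs on both sides: CuPy's cupy.cuda.PythonFunctionAllocator hook and PyTorch's torch.cuda.caching_allocator_alloc() / torch.cuda.caching_allocator_delete(). The sketch below roughly mirrors what pytorch-pfn-extras does internally; it is illustrative only, not the API being proposed here:

```python
import cupy
import torch

def _torch_alloc(size, device_id):
    # Allocate from PyTorch's caching allocator on the current stream.
    # PyTorch and CuPy must agree on the current stream for this to be safe.
    stream_ptr = torch.cuda.current_stream(device_id).cuda_stream
    assert stream_ptr == cupy.cuda.get_current_stream().ptr
    return torch.cuda.caching_allocator_alloc(size, device_id, stream_ptr)

def _torch_free(mem_ptr, device_id):
    # Return the block to PyTorch's caching allocator.
    torch.cuda.caching_allocator_delete(mem_ptr)

# Make all subsequent CuPy allocations draw from PyTorch's memory pool.
cupy.cuda.set_allocator(
    cupy.cuda.PythonFunctionAllocator(_torch_alloc, _torch_free).malloc)
```

The stream half would need similar care: whatever API lands in cupyx would presumably keep the current stream of both libraries in sync, as ppe.cuda.stream() does today.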

Additional Information

No response
