Interoperability with PyTorch memory pool #2762
PyTorch is a higher-level library.
Yes, I agree with that. That's another option.
What we need to do is to provide the features users need, when they need them. Actual user needs sometimes precede a smart or clean design, and I think there is an immediate need for this feature.
We don't have to spoil modularity in order to fulfill the immediate necessity; that can be done by a separate library.
I agree that it is another option to consider 👍
Is #2710 relevant? I know some core functionalities of PyTorch are implemented in C++, but I am not sure about their memory pool.
Is #2710 about exposing the memory pool in Cython for use within CuPy, or exposing it in C/C++ for use in external apps? I think the former is already accomplished in the current CuPy code (and irrelevant to PyTorch interoperability). As for the latter, I'm not familiar with this kind of usage of Cython so I might be misunderstanding, but I think it would not be easy for users to use.
Sorry for my late reply.
#2710 was raised for this use case.
That's right. This is meant for developers building projects on top of (or dependent on) CuPy, not for general users. For the case of PyTorch, a lot of nasty things are done at the C++ level, so I think this could be useful.
I think we can't define which of PyTorch/CuPy is higher as they don't depend on each other. They're both tensor libraries. Ideas:
Note that ideas 1 to 3 are to use the PyTorch allocator in CuPy, whereas idea 4 is to use the CuPy allocator in PyTorch.
I agree that CuPy and PyTorch should not depend on each other. Another library (option 3) is not necessary either. My suggestion is to make PyTorch expose its bare allocators and wrap them with
BTW, I found this in the PyTorch code while looking in depth: https://github.com/pytorch/pytorch/blob/master/aten/src/THC/THCThrustAllocator.cuh This is to allow Thrust to get memory from PyTorch.
Would this issue be closed by #3126?
Closing as we implemented this in a separate library. |
Both CuPy and PyTorch have their own memory pools. For interoperability, it would be better if the memory pool could be shared.
Currently we provide a way to use the PyTorch memory pool as a CuPy memory pool. One idea is to move this code into the CuPy code base (maybe under `cupyx`). https://github.com/chainer/chainer-pytorch-migration/blob/master/chainer_pytorch_migration/allocator.py
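To make the "use one library's pool from the other" idea concrete, here is a minimal, GPU-free sketch of the pattern involved: wrap library A's raw malloc/free behind the allocator callback that library B accepts. In real code the CuPy side would be `cupy.cuda.set_allocator` with memory wrapped in `cupy.cuda.MemoryPointer`; everything below (`ExternalPool`, `Memory`, `set_allocator`, `library_b_alloc`) is an illustrative stand-in, not the actual CuPy or PyTorch API.

```python
class ExternalPool:
    """Stand-in for an external caching allocator (e.g. PyTorch's)."""

    def __init__(self):
        self.live = {}   # ptr -> size of outstanding allocations
        self._next = 1   # fake device address counter

    def malloc(self, size):
        ptr = self._next
        self._next += size
        self.live[ptr] = size
        return ptr

    def free(self, ptr):
        del self.live[ptr]


class Memory:
    """Stand-in for a CuPy-style Memory object: owns a pointer and
    returns it to the originating pool when garbage-collected."""

    def __init__(self, pool, size):
        self.pool = pool
        self.size = size
        self.ptr = pool.malloc(size)

    def __del__(self):
        self.pool.free(self.ptr)


_current_allocator = None

def set_allocator(fn):
    # Stand-in for cupy.cuda.set_allocator: install the callback used
    # for every subsequent device allocation.
    global _current_allocator
    _current_allocator = fn

def library_b_alloc(size):
    # What "library B" (the CuPy side) does internally on allocation.
    return _current_allocator(size)


pool = ExternalPool()
set_allocator(lambda size: Memory(pool, size))

mem = library_b_alloc(1024)           # served from the shared pool
assert pool.live[mem.ptr] == 1024
del mem                               # returned to the shared pool
assert not pool.live
```

The design point is that neither side needs to know about the other: the external pool only exposes malloc/free, and the consuming library only sees its usual allocator-callback interface, which is exactly the decoupling argued for in this thread.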