cuSOLVER wrappers #3

Open · seibert opened this issue Jul 27, 2017 · 6 comments

@seibert
Collaborator

seibert commented Jul 27, 2017

It would make sense to add a wrapper around cuSOLVER. Note that the cudatoolkit 7.5 conda package (as of this issue) does not include the library and will need to be updated. The cudatoolkit 8.0 package in the numba channel on Anaconda Cloud already includes cuSOLVER (conda install -c numba cudatoolkit).
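A quick sanity check for whether an installed cudatoolkit package actually ships the library is to try loading it directly; a minimal sketch (the shared-library names below are Linux-specific assumptions, not part of pyculib):

```python
import ctypes

def have_cusolver():
    """Return True if a libcusolver shared library can be loaded."""
    # Library names are assumptions and Linux-specific; adjust for the
    # toolkit version / platform actually installed.
    for name in ('libcusolver.so', 'libcusolver.so.8.0', 'libcusolver.so.7.5'):
        try:
            ctypes.CDLL(name)
            return True
        except OSError:
            pass
    return False

print('cuSOLVER available:', have_cusolver())
```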

@lebedov

lebedov commented Aug 2, 2017

FYI, I have some BSD-licensed CUSOLVER bindings in scikit-cuda that could potentially be modified for this purpose.
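For reference, those bindings are thin ctypes-level wrappers around the cusolverDn API; a minimal sketch of the pattern (the library name, helper names, and error handling here are illustrative, not scikit-cuda's actual module layout):

```python
import ctypes

_libcusolver = ctypes.CDLL('libcusolver.so')  # assumes Linux naming

# cusolverStatus_t cusolverDnCreate(cusolverDnHandle_t *handle)
_libcusolver.cusolverDnCreate.restype = ctypes.c_int
_libcusolver.cusolverDnCreate.argtypes = [ctypes.POINTER(ctypes.c_void_p)]

# cusolverStatus_t cusolverDnDestroy(cusolverDnHandle_t handle)
_libcusolver.cusolverDnDestroy.restype = ctypes.c_int
_libcusolver.cusolverDnDestroy.argtypes = [ctypes.c_void_p]

def create_handle():
    """Create a cusolverDn handle; raise on a non-zero status code."""
    handle = ctypes.c_void_p()
    status = _libcusolver.cusolverDnCreate(ctypes.byref(handle))
    if status != 0:  # 0 == CUSOLVER_STATUS_SUCCESS
        raise RuntimeError('cusolverDnCreate failed with status %d' % status)
    return handle

def destroy_handle(handle):
    """Release a handle created by create_handle()."""
    _libcusolver.cusolverDnDestroy(handle)
```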

@seibert
Collaborator Author

seibert commented Aug 2, 2017

Oh hey, I didn't realize you were the scikit-cuda developer! We would be very interested in figuring out where we could collaborate, given the high overlap between our projects.

To be honest, pyculib exists mainly so that both CPU-memory NumPy arrays and GPU-memory Numba device arrays can be passed to these libraries through a single API. One thing on my wishlist is a simple protocol that lets Python objects encapsulating GPU-memory ndarrays interoperate between libraries like Numba, PyCUDA, PyTorch, TensorFlow, etc. That would allow a library wrapper like scikit-cuda or pyculib to exist independently of any particular GPU Python project.
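As a rough illustration of what I have in mind (the attribute name and dict layout below are hypothetical, not anything that has been agreed on), a library's device-array type could expose a small metadata dict:

```python
# One possible shape such a protocol could take: a library's device-array type
# exposes a plain dict describing the allocation, so other packages can
# consume it without importing that library's container class.
class MyDeviceArray:
    def __init__(self, device_ptr, shape, strides, typestr):
        self._device_ptr = device_ptr  # integer device address
        self._shape = shape
        self._strides = strides        # in bytes, or None for C-contiguous
        self._typestr = typestr        # e.g. '<f4', as in the buffer protocol

    @property
    def __gpu_array_interface__(self):  # hypothetical attribute name
        return {
            'data': (self._device_ptr, False),  # (pointer, read_only)
            'shape': self._shape,
            'strides': self._strides,
            'typestr': self._typestr,
            'version': 0,
        }
```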

@lebedov

lebedov commented Aug 2, 2017

Sure - collaboration would be great! We can discuss further on some other channel (email, a separate GitHub issue, etc.) - let me know.

By "protocol", I assume you are suggesting something that would convert between different GPU-based array backends for Python rather than provide a canonical backend?

@seibert
Collaborator Author

seibert commented Aug 2, 2017

Yeah, I borrow the term from the Python buffer protocol: https://docs.python.org/3.6/c-api/buffer.html

A GPU version of that would allow any Python package to determine the basic data type, shape, and layout of a GPU device allocation, along with a way to get the device pointer. It's unlikely that all these projects would (or should) agree on a single container object, but they all could easily expose some Python attributes on their own device allocations that would allow other packages to use them.
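On the consuming side, a wrapper library would then only need the metadata and the raw pointer; a sketch using the same hypothetical attribute as above:

```python
import numpy as np

def describe_device_array(obj):
    """Extract (pointer, shape, dtype) from any object exposing the
    hypothetical __gpu_array_interface__ attribute."""
    iface = getattr(obj, '__gpu_array_interface__', None)
    if iface is None:
        raise TypeError('%r does not expose a GPU array interface' % (obj,))
    ptr, _readonly = iface['data']
    return ptr, tuple(iface['shape']), np.dtype(iface['typestr'])
```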

@alkalinin

Hi all, thanks for this discussion. If you need help porting cuSOLVER to pyculib, I can help. I currently want to use cuSOLVER in my Python application, so I can try to adapt it to the pyculib API.

@seibert
Collaborator Author

seibert commented Sep 1, 2017

We'd be happy to review a PR for this, and it looks like you could base it on the code in scikit-cuda.
