
missing numpy cuda math functions, e.g. matmul #3409

Open
cristipurdel opened this issue Oct 14, 2018 · 7 comments
Labels
CUDA (CUDA related issue/PR), feature_request

Comments

@cristipurdel

cristipurdel commented Oct 14, 2018

I have a kernel running with guvectorize on 'cpu' and 'cuda'.
Since there is no matmul support, I have to implement it outside of the kernel as a jitted function, decorated differently depending on whether 'cpu' or 'cuda' is used.
This would be a relatively "easy" way to generate functions like matmul and transpose. I do not know how to submit patches, but I could write the functions as examples.
Not sure if this is related to Docs 6 & 8, but it would be useful to have those functions (and additional ones) when using 'cuda', and basically extend http://numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html#built-in-functions
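For reference, a rough sketch of the workaround I mean (the helper name and the 4x4 shapes are just illustrative, and this is untested as written): the same plain-loop matmul helper is compiled as a CUDA device function or as a nopython-jitted function depending on the target, then called from the guvectorize kernel.

```python
import numpy as np
from numba import cuda, guvectorize, jit

target = 'cuda'  # switch to 'cpu' to run the same code on the CPU target


def _matmul(a, b, out):
    # Naive matrix multiply written with plain loops so it compiles on
    # both the CPU and the CUDA target (no NumPy calls inside).
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            acc = 0.0
            for k in range(a.shape[1]):
                acc += a[i, k] * b[k, j]
            out[i, j] = acc


# Compile the same helper differently depending on the target.
if target == 'cuda':
    matmul = cuda.jit(device=True)(_matmul)
else:
    matmul = jit(nopython=True)(_matmul)


@guvectorize(['void(float64[:,:], float64[:,:], float64[:,:])'],
             '(m,n),(n,p)->(m,p)', target=target)
def gu_matmul(a, b, out):
    matmul(a, b, out)


a = np.random.rand(4, 4)
b = np.random.rand(4, 4)
print(gu_matmul(a, b))
```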

@stuartarchibald
Contributor

Thanks for the request. The following, some of which is still under development, may help:

  1. There is currently no matmul support for CUDA directly in Numba.
  2. The reason for 1. is in part that core linear algebra functionality really needs to come from bindings to e.g. the cuBLAS library, so that performance-optimal BLAS kernels do the necessary work.
  3. Numba supports the __cuda_array_interface__ specification, which means it can interoperate with CuPy.
  4. CuPy provides np.dot along with many more NumPy-like functions (a short interop sketch follows below).

Failing this, pyculib provides a cuBLAS wrapper.
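To illustrate 3. and 4., a minimal sketch, assuming a CuPy version whose cp.asarray accepts objects implementing __cuda_array_interface__:

```python
import numpy as np
from numba import cuda
import cupy as cp

a = np.random.rand(128, 128).astype(np.float32)
b = np.random.rand(128, 128).astype(np.float32)

# Numba device arrays expose __cuda_array_interface__ ...
d_a = cuda.to_device(a)
d_b = cuda.to_device(b)

# ... so CuPy can wrap them without a copy and run its cuBLAS-backed matmul.
c = cp.matmul(cp.asarray(d_a), cp.asarray(d_b))

print(cp.asnumpy(c))  # copy the result back to the host
```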

@stuartarchibald added the feature_request and CUDA (CUDA related issue/PR) labels on Oct 15, 2018
@cristipurdel
Author

CuPy, from what I saw, is more like C, and I would prefer to use Python :)
pyculib has not seen much activity in the last year or so.
I will use my approach with a jitted function and wait for the feature to be added.

@stuartarchibald
Contributor

Just to check we are talking about the same thing: when I say CuPy, I mean https://github.com/cupy/cupy

CuPy: NumPy-like API accelerated with CUDA

It is a Python package that has a NumPy-like API and targets CUDA GPUs. There is no C or C-like code involved unless you want to write custom kernels or do lower-level things. As you asked for NumPy-like functions, they support a lot (https://docs-cupy.chainer.org/en/stable/overview.html), which was my reason for suggesting it. The specific matrix and vector product functions are listed here: https://docs-cupy.chainer.org/en/stable/reference/linalg.html#matrix-and-vector-products.
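For example, a basic matrix product is plain Python all the way (the shapes here are only illustrative):

```python
import cupy as cp

# Allocate directly on the GPU; the API mirrors NumPy.
a = cp.random.rand(256, 256).astype(cp.float32)
b = cp.random.rand(256, 256).astype(cp.float32)

c = cp.matmul(a, b)          # executed on the GPU (cuBLAS under the hood)
print(cp.asnumpy(c).shape)   # bring the result back to the host
```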

Hope this helps?

@cristipurdel
Author

CuPy definitely looks interesting, but it would be nice if Numba could integrate some of those functions natively.
I would prefer to leave this issue open as a feature request.

@cristipurdel
Author

CuPy is definitely interesting, especially for Numba interoperability:
cupy/cupy#1760
I also made some test runs using cupy & tensorflow vs numpy:
https://github.com/cristipurdel/python/blob/master/linalg_solve_np_cp_tf.py

@cristipurdel
Author

Any chance of making a wrapper for at least the BLAS functions, since there is a certain degree of interoperability with CuPy?

@cristipurdel
Author

Would it be possible to inline cupy/numpy functions in the guvectorize decorator?
I tried it, but it gives errors.
If this worked, it would make guvectorize behave more like OpenACC/OpenMP :)
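To make the question concrete, this is roughly the kind of thing I tried (simplified and illustrative only; with target='cuda' it fails to compile because NumPy functions like np.dot are not supported inside CUDA kernels):

```python
import numpy as np
from numba import guvectorize

# "Inlining" a NumPy call inside the kernel body; the CUDA target
# rejects the np.dot call at compile time.
@guvectorize(['void(float64[:,:], float64[:,:], float64[:,:])'],
             '(m,n),(n,p)->(m,p)', target='cuda')
def gu_matmul(a, b, out):
    out[:, :] = np.dot(a, b)
```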
