
will it support cuda.jit ? #14

Open
dathudeptrai opened this issue Aug 12, 2020 · 4 comments

Comments

@dathudeptrai

dathudeptrai commented Aug 12, 2020

Will this lib support making TensorFlow tensors work with cuda.jit (from numba)?

@jonasrauber
Owner

Could you be a bit more specific about what you mean?
Support in which way?
How would that make sense in combination with PyTorch, TensorFlow, and JAX?

@machineko

Via __cuda_array_interface__, torch, CuPy, and JAX work natively with numba's cuda.jit: you just pass a torch CUDA tensor or JAX device array straight to the kernel.

TF doesn't have that interface, but it does have https://www.tensorflow.org/api_docs/python/tf/experimental/dlpack/from_dlpack, so you could add it pretty easily.
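To make the interface machineko is referring to concrete: __cuda_array_interface__ is just a dict of metadata (shape, dtype string, data pointer) that lets a consumer wrap existing GPU memory without copying. A minimal sketch using NumPy's CPU analogue, __array_interface__, shows the same mechanism and runs without a GPU (the Holder class here is a hypothetical producer for illustration, not part of any of the libraries discussed):

```python
import numpy as np

# NumPy's __array_interface__ is the CPU analogue of the
# __cuda_array_interface__ protocol that lets numba's cuda.jit
# consume torch / CuPy / JAX GPU tensors directly.
class Holder:
    """Minimal producer: any object exposing the protocol dict."""
    def __init__(self, arr):
        self._arr = arr  # keep the underlying buffer alive
        self.__array_interface__ = arr.__array_interface__

a = np.arange(6, dtype=np.float32).reshape(2, 3)
print(a.__array_interface__["shape"])    # (2, 3)
print(a.__array_interface__["typestr"])  # '<f4' on little-endian machines

# A consumer wraps the same memory without copying:
view = np.asarray(Holder(a))
view[0, 0] = 42.0
assert a[0, 0] == 42.0  # both names share one buffer
```

On the GPU side the dict is the same idea with a device pointer instead of a host pointer, which is why a torch CUDA tensor can be handed to a numba kernel with no conversion step.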

@jonasrauber
Owner

jonasrauber commented Aug 14, 2020

Oh, now I understand what you mean. Yes, I am aware of dlpack and cuda_array_interface and they are both very interesting. Right now there is no need for them in EagerPy, because they are for exchanging tensors between different frameworks and EagerPy is rather about writing code that works with each framework (but that doesn't combine different frameworks).
If there is a good use case, it might be worth considering how we can add this explicitly. In a way it's already possible to use, because EagerPy tensors are really just wrapped tensors from the individual frameworks.

If your question is specifically about whether you can use EagerPy tensors with numba, then I guess the answer is the same: EagerPy tensors are really just wrapped tensors, and you can at any time get the raw tensor and then its __cuda_array_interface__.
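A toy sketch of the wrapping pattern described above (this is an illustration of the design, not EagerPy's actual code; real EagerPy tensors do expose the framework-native tensor as .raw):

```python
import numpy as np

class WrappedTensor:
    """Toy version of the pattern: the framework-native tensor is
    stored untouched and stays reachable at any time via .raw."""
    def __init__(self, raw):
        self.raw = raw  # the unmodified underlying tensor

    def square(self):
        # operations delegate to the raw tensor and re-wrap the result
        return WrappedTensor(self.raw * self.raw)

t = WrappedTensor(np.array([1.0, 2.0, 3.0]))
out = t.square()
print(out.raw)  # [1. 4. 9.] -- the raw array, ready for any consumer
# With a real torch CUDA tensor in place of the NumPy array,
# out.raw.__cuda_array_interface__ would be what a numba
# cuda.jit kernel consumes.
```

Because the wrapper never copies or converts, unwrapping is free, which is why no explicit interop support is needed in the wrapper library itself.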

If you are asking for something else, please provide a specific example of code that you would want to be supported.

@machineko

machineko commented Aug 14, 2020

I don't know exactly what OP wants from cuda.jit, but it's a pretty neat way to run plain Python code on GPU arrays: you can convert almost any function written for the CPU to run on the GPU with very little overhead. It's especially good for things like searches over big arrays (for example in GlowTTS).
You probably don't need to add a custom interface; just adding a DLPack handler for TF tensors should be enough.
