
Are we CUDA yet? #16

Open
denzp opened this issue Jun 5, 2019 · 10 comments

Comments

@denzp
Member

denzp commented Jun 5, 2019

What do you think about creating a simple website similar to Are we async yet?, Are we learning yet?, and the many others?

It could serve as an overview of our progress on the roadmap, and a central place to store links to blogs and announcements.

@bheisler

bheisler commented Jun 6, 2019

Well... for what it's worth, I registered arewegpgpuyet.com a while back. Meant to set up a site behind it but never did.

@gnzlbg
Contributor

gnzlbg commented Jun 6, 2019

Sounds good!

@denzp
Member Author

denzp commented Jun 6, 2019

@bheisler if you don't mind, that would be very helpful!

We could keep it as an overall GPGPU overview, and mention that CUDA currently has the best Rust support (IIRC) and the only working group (once it becomes official).

It also helps us ensure the website is future-proof: information about other GPGPU platforms can be added later once they are ready.

@bheisler

bheisler commented Jun 8, 2019

I'd be happy to point the DNS records for arewegpgpuyet.com at a server, but I doubt I'll get a chance to generate a website or set up a server to serve it any time soon.

@grovesNL

grovesNL commented Jun 8, 2019

FWIW if anyone's interested, there are some parallel efforts in the Rust graphics/game dev ecosystem for GPGPU through Vulkan/Metal/DirectX compute shaders. Projects like gfx-hal abstract over all of them to accept only SPIR-V, and there are other projects to compile OpenCL kernels (not a Rust project) and Rust to SPIR-V.

I'm not sure if this is useful or if there are opportunities for collaboration anywhere, but perhaps some of this would at least fit into some kind of category on a page like arewegpgpuyet :)

@gnzlbg
Contributor

gnzlbg commented Jun 8, 2019

+1. An arewegpgpuyet website should be more general than just CUDA.

@omac777

omac777 commented Oct 22, 2019

I enjoyed reading your blog post about Rust CUDA interoperability issues:
https://bheisler.github.io/post/state-of-gpgpu-in-rust/

I totally agree with everything you mentioned.
I think C++'s Thrust and arrayfire-rust are a very useful set of APIs.
APIs that are lacking:

  1. string indexing and searching (targeting text-based files)
  2. byte array indexing and searching (targeting binary-based files)
  3. building on the prep work from 1., search and replace within text files and/or memory
  4. building on the prep work from 2., search and replace within binary files and/or memory

When I was looking at vulkano (the Rust binding for Vulkan), I was surprised to not find any GPGPU string processing functions. Ditto for OpenCL. The CUDA RAPIDS nvstrings/nvcategory APIs are the closest to this, but they are still very low-level and less featureful than I would have expected, considering how many generations of GPUs we have had and that the term GPGPU has been around for at least 10 years.
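To make the wishlist concrete, here is a CPU-side reference in Rust for the kind of byte-array search-and-replace primitive described above. The names `find_all` and `replace_all` are hypothetical, not from any existing crate; a GPGPU version would run the independent window comparisons in parallel on the device.

```rust
/// Return the starting offsets of every occurrence of `needle` in `haystack`.
fn find_all(haystack: &[u8], needle: &[u8]) -> Vec<usize> {
    if needle.is_empty() || needle.len() > haystack.len() {
        return Vec::new();
    }
    // Each window check is independent of the others, which is what
    // makes this operation a natural candidate for a GPU kernel.
    (0..=haystack.len() - needle.len())
        .filter(|&i| &haystack[i..i + needle.len()] == needle)
        .collect()
}

/// Replace every occurrence of `needle` with `replacement`.
fn replace_all(haystack: &[u8], needle: &[u8], replacement: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(haystack.len());
    let mut i = 0;
    while i < haystack.len() {
        if !needle.is_empty() && haystack[i..].starts_with(needle) {
            out.extend_from_slice(replacement);
            i += needle.len();
        } else {
            out.push(haystack[i]);
            i += 1;
        }
    }
    out
}

fn main() {
    let text = b"to be or not to be";
    assert_eq!(find_all(text, b"to"), vec![0, 13]);
    assert_eq!(replace_all(text, b"be", b"go"), b"to go or not to go");
    println!("ok");
}
```

The same two primitives would cover both the text-based and binary-based cases in the list, since they operate on raw bytes rather than `str`.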

I recently found an article on GPUfs:

https://www.cs.utexas.edu/users/witchel/pubs/silberstein13asplos-gpufs.pdf
https://github.com/gpufs/gpufs

which uses an interesting kernel API call wrapping mechanism to make it transparent to use in certain ways, but I was startled that it didn't conform to the usual FUSE model, where you mount something and interact with it as if it were a filesystem. That would have been a very interesting way to communicate with the GPU: as a file system, and as a parallel file system. You copy data into certain directories representing the different streams or vectors, then possibly copy a compute kernel into a particular directory, then copy a launch request file into another directory mapping to a particular vector. It could be activated via a touch command, for example. This is all a brain fart perhaps, but I wanted to voice the concept to see what all of you were envisioning as a way to interoperate with these capabilities.
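The proposed directory protocol can be sketched in shell. To be clear, nothing here talks to a real GPU; this only simulates the hypothetical layout in a temp directory, and every path and name (`vectors/`, `kernels/`, `launch/`, `saxpy.ptx`) is invented for illustration.

```shell
#!/bin/sh
set -e

# Stand-in for a mounted GPU filesystem (a real design would be a FUSE mount).
GPUFS=$(mktemp -d)
mkdir -p "$GPUFS/vectors/v0" "$GPUFS/kernels" "$GPUFS/launch"

# 1. Copy input data into a directory representing a vector/stream.
printf '1 2 3 4' > "$GPUFS/vectors/v0/data"

# 2. Copy a compute kernel into the kernels directory.
printf 'fake ptx' > "$GPUFS/kernels/saxpy.ptx"

# 3. Activate a launch by touching a request file named after the vector.
touch "$GPUFS/launch/v0"

ls -R "$GPUFS"
```

In a real implementation the FUSE daemon would react to step 3 by scheduling the kernel from step 2 over the data from step 1.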

@lahwran

lahwran commented Mar 7, 2020

So among all the projects so far, is there anything on a path toward being able to write Rust code that is competitive with hand-optimized CUDA C++, for an algorithm that depends on device intrinsics, shared memory, and so on? I'm concerned that the multi-step compilation through SPIR-V may not expose the full capability of the CUDA toolkit, and I'm wondering what would need to be done to get a PTX target working again. It would also be sweet if the same Rust code that compiles to SPIR-V could compile directly to CUDA, so that you can feature-gate calling the CUDA-specific APIs.

@saona-raimundo

Hi! Just wandering around the everlasting topic of GPGPU in Rust.

> and I'm wondering what would need to be done to get a PTX target working again.

Doesn't the Rust PTX Linker solve this problem? I mean, generating .ptx files from Rust code (although Windows users still need to wait for now).

I understand that once you have the .ptx file, you can go ahead and do the integration with RustaCUDA, and that this is the "safest" and "default" way to use CUDA in Rust. Am I wrong?
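For reference, the host side of that .ptx-plus-RustaCUDA flow looks roughly like the sketch below, following RustaCUDA's documented API. It is not runnable without a CUDA-capable GPU, and the PTX path (`resources/add.ptx`) and kernel name (`sum`) are placeholders standing in for whatever ptx-linker emitted from your kernel crate.

```rust
use rustacuda::launch;
use rustacuda::prelude::*;
use std::error::Error;
use std::ffi::CString;

fn main() -> Result<(), Box<dyn Error>> {
    // Initialize the CUDA driver API and grab the first device.
    rustacuda::init(CudaFlags::empty())?;
    let device = Device::get_device(0)?;
    let _ctx = Context::create_and_push(
        ContextFlags::MAP_HOST | ContextFlags::SCHED_AUTO, device)?;

    // Load the PTX that ptx-linker produced from the kernel crate.
    let ptx = CString::new(include_str!("../resources/add.ptx"))?;
    let module = Module::load_from_string(&ptx)?;
    let stream = Stream::new(StreamFlags::NON_BLOCKING, None)?;

    // Copy inputs to the device.
    let mut x = DeviceBuffer::from_slice(&[1.0f32; 128])?;
    let mut y = DeviceBuffer::from_slice(&[2.0f32; 128])?;
    let mut out = DeviceBuffer::from_slice(&[0.0f32; 128])?;

    unsafe {
        // One block of 128 threads; `sum` writes x[i] + y[i] into out[i].
        launch!(module.sum<<<1, 128, 0, stream>>>(
            x.as_device_ptr(),
            y.as_device_ptr(),
            out.as_device_ptr(),
            out.len()
        ))?;
    }
    stream.synchronize()?;

    // Copy the result back and check it.
    let mut host = [0.0f32; 128];
    out.copy_to(&mut host)?;
    assert!(host.iter().all(|&v| v == 3.0));
    Ok(())
}
```

The `launch!` invocation is `unsafe` because nothing verifies that the kernel's signature matches the arguments you pass, which is part of why this route is only "safest" in quotes.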

Cheers!

@lahwran

lahwran commented Jan 13, 2021

I think the current status is that these two projects need work, but I've only had time to shop around, not to set them up and try them out:

https://github.com/denzp/rust-ptx-builder
https://github.com/termoshtt/accel
