An ecosystem of libraries and tools for writing and executing extremely fast GPU code fully in Rust
The Rust CUDA Project aims to make Rust a tier-1 language for extremely fast GPU computing using the CUDA Toolkit. It provides tools for compiling Rust to highly optimized PTX code, as well as libraries for using existing CUDA libraries from Rust.
Historically, general-purpose, high-performance GPU computing has been done using the CUDA Toolkit. The toolkit primarily provides a way to write single-source programs that combine CPU and GPU code in C, C++, or Fortran. It also provides many libraries, tools, forums, and documentation to supplement that single-source CPU/GPU code.
CUDA is an NVIDIA-only toolkit. Many cross-platform alternatives have been proposed, such as OpenCL, Vulkan compute, and HIP, yet CUDA remains by far the most widely used toolkit for such tasks. This is why it is imperative to make Rust a viable option for use with the CUDA Toolkit.
However, CUDA with Rust has historically been a very rocky road. Until now, the only viable option was the LLVM PTX backend, which does not always work and generates invalid PTX for many common Rust operations. In recent years, the advent of projects such as rust-gpu (for Rust -> SPIR-V) has shown time and time again that Rust on the GPU needs a specialized solution.
Our hope is that with this project we can push the Rust GPU computing industry forward and make Rust an excellent language for such tasks. Rust offers plenty of benefits, such as:
- `__restrict__`-style performance benefits for every kernel (thanks to Rust's aliasing guarantees)
- an excellent module/crate system
- delimiting unsafe areas of CPU/GPU code with `unsafe`
- high-level wrappers for low-level CUDA libraries
The scope of the Rust CUDA Project is quite broad: it spans the entirety of the CUDA ecosystem, with libraries and tools to make it usable from Rust. The project therefore contains many crates covering all corners of the CUDA ecosystem.
The current line-up of libraries is the following:
- `rustc_codegen_nvvm`, a rustc backend that targets NVVM IR (a subset of LLVM IR) for the libnvvm library.
  - Generates highly optimized PTX code which can be loaded by the CUDA Driver API to execute on the GPU.
  - For the near future it will be CUDA-only, but it may eventually be used to target amdgpu as well.
- `cuda_std` for GPU-side functions and utilities, such as thread index queries, memory allocation, warp intrinsics, etc.
  - Not a low-level library; it provides many utility functions that make it easier to write cleaner and more reliable GPU kernels.
  - Closely tied to `rustc_codegen_nvvm`, which exposes GPU features through it internally.
- `cudnn` for a collection of GPU-accelerated primitives for deep neural networks.
- `cust` for CPU-side CUDA features such as launching GPU kernels, GPU memory allocation, device queries, etc.
  - High level, with features such as RAII and Rust `Result`s that make it easier and cleaner to manage the interface to the GPU.
  - A high-level wrapper for the CUDA Driver API, the lower-level counterpart of the more common CUDA Runtime API used from C++.
  - Provides much more fine-grained control over things like kernel concurrency and module loading than the C++ Runtime API.
- `gpu_rand` for GPU-friendly random number generation; currently implements only xoroshiro RNGs.
- `optix` for CPU-side hardware raytracing and denoising using the CUDA OptiX library.
In addition, the project contains many "glue" crates, such as high-level wrappers for certain smaller CUDA libraries.
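To make the GPU side concrete, a kernel written against `cuda_std` might look roughly like the sketch below. This is illustrative rather than definitive: the `#[kernel]` attribute and the `thread::index_1d` query follow the project's documented style, but exact item paths may differ between versions, and compiling it requires the `rustc_codegen_nvvm` backend rather than plain rustc.

```rust
use cuda_std::prelude::*;

// Element-wise vector addition. Each GPU thread handles one index;
// the bounds check guards threads launched past the end of the data.
#[kernel]
pub unsafe fn add(a: &[f32], b: &[f32], c: *mut f32) {
    let idx = thread::index_1d() as usize;
    if idx < a.len() {
        let elem = &mut *c.add(idx);
        *elem = a[idx] + b[idx];
    }
}
```

Note that the kernel body is ordinary (unsafe) Rust: slices, bounds checks, and raw-pointer writes, with `unsafe` delimiting the GPU-side hazards exactly as it would on the CPU.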
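On the CPU side, `cust` drives compiled PTX through the Driver API. The following sketch assumes names from the crate's documented quick-start (`quick_init`, `Module::from_ptx`, `DeviceBuffer`, the `launch!` macro); treat it as a sketch under those assumptions, since it needs an NVIDIA GPU to run and exact signatures may vary by version. It assumes a hypothetical kernel named `add` that takes two input slices and an output pointer (slices lower to pointer + length pairs at the launch boundary).

```rust
use cust::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize CUDA and create a context on the first device (RAII-managed).
    let _ctx = cust::quick_init()?;

    // Load PTX previously emitted by rustc_codegen_nvvm.
    let module = Module::from_ptx(include_str!("../kernels.ptx"), &[])?;
    let stream = Stream::new(StreamFlags::NON_BLOCKING, None)?;

    // Device buffers free their GPU memory on drop.
    let a = DeviceBuffer::from_slice(&[1.0f32, 2.0, 3.0])?;
    let b = DeviceBuffer::from_slice(&[4.0f32, 5.0, 6.0])?;
    let c = DeviceBuffer::from_slice(&[0.0f32; 3])?;

    // Launch one block of three threads on the stream, then wait for the GPU.
    unsafe {
        launch!(module.add<<<1, 3, 0, stream>>>(
            a.as_device_ptr(), a.len(),
            b.as_device_ptr(), b.len(),
            c.as_device_ptr()
        ))?;
    }
    stream.synchronize()?;

    // Copy the result back to the host.
    let mut out = [0.0f32; 3];
    c.copy_to(&mut out)?;
    println!("{:?}", out);
    Ok(())
}
```

Errors surface as Rust `Result`s, and every resource (context, module, stream, buffers) cleans up via RAII, which is the "easier and cleaner" management the list above refers to.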
Other projects related to using Rust on the GPU:
- 2016: glassful, a subset of Rust that compiles to GLSL.
- 2017: inspirv-rust, an experimental Rust MIR -> SPIR-V compiler.
- 2018: nvptx, a Rust-to-PTX compiler using the `nvptx` target for rustc (built on the LLVM PTX backend).
- 2020: accel, a higher-level library that relied on the same mechanism as nvptx.
- 2020: rlsl, an experimental Rust -> SPIR-V compiler (predecessor to rust-gpu).
- 2020: rust-gpu, a rustc codegen backend that compiles Rust to SPIR-V for use in shaders, using a similar mechanism to our project.
Licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your discretion.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.