🎉 Modern CUDA Learn Notes with PyTorch for beginners: fp32/tf32, fp16/bf16, fp8/int8, Tensor/CUDA Cores, flash_attn, rope, embedding, sgemm, sgemv, hgemm, hgemv, warp/block reduce, dot prod, elementwise, sigmoid, relu, gelu, softmax, layernorm, rmsnorm, hist, and some CUDA optimization techniques (pack LDST, cp.async, warp gemv, sliced_k/split_k/pipeline gemm, bank-conflict reduction, WMMA/MMA, block/warp swizzle, etc.).
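For a taste of the warp/block reduce and dot-product kernels listed above, here is a minimal, hedged FP32 sketch based on warp shuffle intrinsics; kernel and helper names such as `dot_prod_f32_kernel` are illustrative and not necessarily the ones used in this repo.

```cuda
#include <cuda_runtime.h>

// Warp-level sum reduction with shuffle intrinsics ("warp reduce").
__device__ __forceinline__ float warp_reduce_sum(float val) {
  // Each xor-shuffle step folds partial sums across lanes; after the loop,
  // every lane of the warp holds the full warp sum.
  #pragma unroll
  for (int offset = 16; offset > 0; offset >>= 1)
    val += __shfl_xor_sync(0xffffffff, val, offset);
  return val;
}

// Block-level reduction: one partial sum per warp goes through shared
// memory, then the first warp reduces those partials ("block reduce").
template <int NUM_THREADS = 256>
__device__ float block_reduce_sum(float val) {
  constexpr int NUM_WARPS = (NUM_THREADS + 31) / 32;
  __shared__ float smem[NUM_WARPS];
  int lane = threadIdx.x % 32;
  int warp = threadIdx.x / 32;
  val = warp_reduce_sum(val);
  if (lane == 0) smem[warp] = val;
  __syncthreads();
  val = (lane < NUM_WARPS) ? smem[lane] : 0.0f;
  if (warp == 0) val = warp_reduce_sum(val);
  return val;  // the full block sum is valid in warp 0
}

// Dot product: each block computes a partial dot product and adds it
// atomically into the single-element output (assumed zero-initialized).
template <int NUM_THREADS = 256>
__global__ void dot_prod_f32_kernel(const float* a, const float* b,
                                    float* out, int N) {
  int idx = blockIdx.x * NUM_THREADS + threadIdx.x;
  float prod = (idx < N) ? a[idx] * b[idx] : 0.0f;
  float sum = block_reduce_sum<NUM_THREADS>(prod);
  if (threadIdx.x == 0) atomicAdd(out, sum);
}
```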
CUDA Cores | Sliced K (Loop over K) | Tile Block | Tile Thread |
---|---|---|---|
✔️ | ✔️ | ✔️ | ✔️ |
WMMA (m16n16k16) | MMA (m16n8k16) | Pack LDST (128 bits) | SMEM Padding |
✔️ | ✔️ | ✔️ | ✔️ |
Copy Async | Tile MMA (More Threads) | Tile Warp (More Values) | Multi Stages |
✔️ | ✔️ | ✔️ | ✔️ |
Reg Double Buffers | Block Swizzle | Warp Swizzle | Collective Store (Shfl) |
✔️ | ✔️ | ✔️ | ✔️ |
Row Major (NN) | Col Major (TN) | SGEMM TF32 | SMEM Swizzle (Permute) |
✔️ | ✔️ | ✔️ | ❔ |
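As a rough illustration of the WMMA (m16n16k16) path in the table above, the sketch below computes one 16x16 tile of C per warp with `nvcuda::wmma`. It is a minimal sketch, assuming row-major FP16 A/B/C with M, N, K divisible by 16 on sm_70+, and it deliberately omits the shared-memory tiling, cp.async, multi-stage pipelining and swizzling listed above; kernel and variable names are illustrative, not the ones used in this repo.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Naive HGEMM, one 16x16x16 WMMA tile per warp: C = A * B.
// A is MxK, B is KxN, C is MxN, all row-major, M/N/K multiples of 16.
// Launch e.g. with dim3 block(128, 4), dim3 grid((N/16+3)/4, (M/16+3)/4).
__global__ void hgemm_wmma_naive(const half* A, const half* B, half* C,
                                 int M, int N, int K) {
  // One warp per 16x16 output tile: column index from x, row index from y.
  int warp_n = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
  int warp_m = blockIdx.y * blockDim.y + threadIdx.y;
  if (warp_m * 16 >= M || warp_n * 16 >= N) return;

  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
  wmma::fragment<wmma::accumulator, 16, 16, 16, half> c_frag;
  wmma::fill_fragment(c_frag, __float2half(0.0f));

  // Loop over K in steps of 16 (the "Sliced K" idea, without SMEM tiling).
  for (int k = 0; k < K; k += 16) {
    wmma::load_matrix_sync(a_frag, A + warp_m * 16 * K + k, K);
    wmma::load_matrix_sync(b_frag, B + k * N + warp_n * 16, N);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
  }
  wmma::store_matrix_sync(C + warp_m * 16 * N + warp_n * 16, c_frag, N,
                          wmma::mem_row_major);
}
```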
Currently, on NVIDIA L20, RTX 4090 and RTX 3090 Laptop, the HGEMM (WMMA and MMA) kernels implemented in this repo achieve approximately 95%~98% of the performance of cuBLAS's default Tensor Cores math algorithm `CUBLAS_GEMM_DEFAULT_TENSOR_OP`. Please check the hgemm benchmark for more details.
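For context, here is a hedged sketch of how such a cuBLAS Tensor Core baseline is typically invoked via `cublasGemmEx` with `CUBLAS_GEMM_DEFAULT_TENSOR_OP` (FP16 in/out, FP16 compute). The wrapper name and the row-major-via-operand-swap convention are illustrative assumptions; the actual baseline setup is in the hgemm benchmark referenced above.

```cuda
#include <cublas_v2.h>
#include <cuda_fp16.h>

// Hedged sketch of a cuBLAS Tensor Core baseline: C = A * B with FP16
// inputs/outputs. d_A (MxK), d_B (KxN), d_C (MxN) are row-major device
// pointers; error checking is omitted for brevity.
void cublas_hgemm_tensor_op(cublasHandle_t handle,
                            const half* d_A, const half* d_B, half* d_C,
                            int M, int N, int K) {
  const half alpha = __float2half(1.0f);
  const half beta  = __float2half(0.0f);
  // cuBLAS is column-major: compute C^T = B^T * A^T by swapping operands,
  // so the result lands in row-major C without an explicit transpose.
  cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
               N, M, K,
               &alpha,
               d_B, CUDA_R_16F, N,
               d_A, CUDA_R_16F, K,
               &beta,
               d_C, CUDA_R_16F, N,
               CUBLAS_COMPUTE_16F,
               CUBLAS_GEMM_DEFAULT_TENSOR_OP);
}
```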
- / = not supported yet.
- ✔️ = known to work and already supported.
- ❔ = planned, but not coming soon; maybe a few weeks later.
- Workflow: custom CUDA kernel implementation -> PyTorch Python binding -> run tests (see the sketch after this list).
- How to contribute? Please check 🌤🌤Kernel Trace & Goals & Code Style & Acknowledgements🎉🎉
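A minimal sketch of that workflow (kernel -> binding -> test), assuming a hypothetical `elementwise_add_f32` kernel bound through a standard `torch/extension.h` module; the real bindings in this repo may be organized differently.

```cuda
// elementwise_add_ext.cu -- hypothetical example of the workflow:
// custom CUDA kernel -> PyTorch Python binding -> run tests.
#include <torch/extension.h>

// FP32 elementwise add: one element per thread.
__global__ void elementwise_add_f32_kernel(const float* a, const float* b,
                                           float* c, int N) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < N) c[idx] = a[idx] + b[idx];
}

// Python-facing wrapper: validates inputs and launches the kernel.
torch::Tensor elementwise_add_f32(torch::Tensor a, torch::Tensor b) {
  TORCH_CHECK(a.is_cuda() && b.is_cuda(), "expect CUDA tensors");
  TORCH_CHECK(a.sizes() == b.sizes() && a.dtype() == torch::kFloat32,
              "expect same-shape fp32 tensors");
  auto c = torch::empty_like(a);
  int N = static_cast<int>(a.numel());
  int threads = 256, blocks = (N + threads - 1) / threads;
  elementwise_add_f32_kernel<<<blocks, threads>>>(
      a.data_ptr<float>(), b.data_ptr<float>(), c.data_ptr<float>(), N);
  return c;
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("elementwise_add_f32", &elementwise_add_f32, "a + b (CUDA, fp32)");
}
```

From Python, such a file can then be built with `torch.utils.cpp_extension.load(...)` and the result compared against `a + b` as the test step.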
👉TIPS: * means the kernel uses Tensor Cores (MMA/WMMA); otherwise, it uses CUDA Cores by default.
💡Note: The articles written by these experts are excellent, and I have learned a lot from them. Everyone is welcome to submit a PR recommending more great articles!
GNU General Public License v3.0
Welcome to 🌟👆🏻 star this repo & submit a PR!