🎉 CUDA notes / hand-written CUDA kernels for LLMs / C++ notes, updated sporadically: flash_attn, sgemm, sgemv, warp reduce, block reduce, dot product, elementwise, softmax, layernorm, rmsnorm, hist, etc.
Updated May 19, 2024 · CUDA
Efficient kernel for RMS normalization with fused operations, includes both forward and backward passes, compatibility with PyTorch.
Simple character-level Transformer
Generative models, nano version, for fun. No SOTA here, nano first.