flashinfer-ai / flashinfer
FlashInfer: Kernel Library for LLM Serving
SpargeAttention: A training-free sparse attention that can accelerate inference for any model.
Tile primitives for speedy kernels
cuVS - a library for vector search and clustering on the GPU
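A minimal brute-force sketch of what GPU vector search computes, assuming L2 distance and an index small enough to scan exhaustively. Libraries like cuVS replace this scan with accelerated ANN indexes; the names below are illustrative, not cuVS's API.

    import torch

    def knn_search(index, queries, k):
        # Pairwise L2 distances, then the k smallest per query.
        d = torch.cdist(queries, index)          # (num_queries, num_vectors)
        dist, idx = torch.topk(d, k, largest=False)
        return dist, idx

    index = torch.randn(10_000, 128)             # database vectors
    queries = torch.randn(5, 128)
    dist, idx = knn_search(index, queries, k=10)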
NCCL Tests
CUDA Kernel Benchmarking Library
DeepEP: an efficient expert-parallel communication library
CUDA accelerated rasterization of gaussian splatting
Quantized attention that achieves 2-3x speedups over FlashAttention and 3-5x over xformers, without sacrificing end-to-end metrics across language, image, and video models.
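A minimal conceptual sketch of the idea behind quantized attention, assuming per-tensor symmetric INT8 quantization of Q and K before the QK^T matmul. This illustrates the technique generically, not this library's actual kernels.

    import torch

    def int8_quantize(x):
        # Symmetric per-tensor quantization: map [-max|x|, max|x|] to [-127, 127].
        scale = x.abs().amax() / 127.0
        q = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
        return q, scale

    def quantized_attention(q, k, v):
        # Quantize Q and K, compute QK^T on the quantized values (emulated here
        # with a float matmul), then rescale before the softmax.
        qq, qs = int8_quantize(q)
        kq, ks = int8_quantize(k)
        scores = (qq.float() @ kq.float().transpose(-1, -2)) * (qs * ks)
        scores = scores / q.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ v

    q = torch.randn(1, 8, 128, 64)   # (batch, heads, seq, head_dim)
    k = torch.randn(1, 8, 128, 64)
    v = torch.randn(1, 8, 128, 64)
    out = quantized_attention(q, k, v)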
PyTorch bindings for CUTLASS grouped GEMM.
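A minimal reference for what a grouped GEMM computes, assuming a list of independent (A_i, B_i) pairs whose shapes may differ per group. A fused kernel such as the CUTLASS-backed bindings above runs all the matmuls in a single launch; this loop is only the semantic specification.

    import torch

    def grouped_gemm_reference(As, Bs):
        # Each problem i computes C_i = A_i @ B_i independently.
        return [a @ b for a, b in zip(As, Bs)]

    As = [torch.randn(m, 64) for m in (32, 57, 128)]  # ragged M dimensions
    Bs = [torch.randn(64, 96) for _ in As]
    Cs = grouped_gemm_reference(As, Bs)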
Causal depthwise conv1d in CUDA, with a PyTorch interface
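A minimal PyTorch sketch of a causal depthwise conv1d over channels-first (B, C, L) input: groups=C makes the convolution depthwise (one filter per channel), and left-padding by kernel_size - 1 makes it causal, so output position t sees only inputs at or before t. The CUDA kernel above fuses this pattern; this is the plain PyTorch equivalent.

    import torch
    import torch.nn.functional as F

    def causal_depthwise_conv1d(x, weight):
        # x: (B, C, L); weight: (C, 1, K) -- one filter per channel.
        k = weight.shape[-1]
        x = F.pad(x, (k - 1, 0))                 # pad on the left only
        return F.conv1d(x, weight, groups=x.shape[1])

    x = torch.randn(2, 16, 100)
    w = torch.randn(16, 1, 4)
    y = causal_depthwise_conv1d(x, w)            # shape (2, 16, 100)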
cuGraph - RAPIDS Graph Analytics Library
FlashMLA: Efficient MLA decoding kernels
Instant neural graphics primitives: lightning fast NeRF and more
RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications more easily.
A massively parallel, optimal functional runtime in Rust
This package contains the original 2012 AlexNet code.
LLM training in simple, raw C/CUDA
CUDA Library Samples
[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.