SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Updated Aug 6, 2025 · Python
Slow, low-precision floating point types
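Both entries deal with sub-8-bit float formats such as FP4 and NF4. As a concrete illustration of what 4-bit quantization involves, below is a minimal sketch of blockwise NF4 quantize/dequantize in plain NumPy. This is not the API of either repository; the block size of 64 and nearest-neighbor codebook lookup are simplifying assumptions, while the 16 NF4 levels are the published values from the QLoRA paper (Dettmers et al., 2023).

```python
# Minimal sketch: blockwise 4-bit NormalFloat (NF4) quantization.
# Illustrative only; real implementations pack two 4-bit codes per byte
# and often quantize the per-block scales as well.
import numpy as np

# The 16 NF4 levels from the QLoRA paper (Dettmers et al., 2023).
NF4_LEVELS = np.array([
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
], dtype=np.float32)

def nf4_quantize(weights: np.ndarray, block_size: int = 64):
    """Quantize a 1-D float array to 4-bit NF4 codes with per-block absmax scales."""
    assert weights.size % block_size == 0, "pad to a multiple of block_size first"
    blocks = weights.reshape(-1, block_size)
    # One float scale per block: the largest magnitude, so values map into [-1, 1].
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    normalized = blocks / scales
    # Nearest-neighbor lookup into the NF4 codebook -> one 4-bit code per weight.
    codes = np.abs(normalized[..., None] - NF4_LEVELS).argmin(axis=-1).astype(np.uint8)
    return codes, scales

def nf4_dequantize(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct float weights from 4-bit codes and per-block scales."""
    return (NF4_LEVELS[codes] * scales).reshape(-1)

w = np.random.randn(256).astype(np.float32)
codes, scales = nf4_quantize(w)
w_hat = nf4_dequantize(codes, scales)
print("max abs error:", np.abs(w - w_hat).max())
```

The per-block absmax scale is what lets a fixed 16-entry codebook cover weights of any magnitude: each block is normalized into [-1, 1] before lookup, and the scale is stored alongside the codes for dequantization.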