Add FP8 kernel acceleration for compressed-tensors quantized models #45699
jiqing-feng wants to merge 10 commits into huggingface:main
Conversation
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
cc @SunMarc
> - XPU: `torch._scaled_mm`
> - CUDA: `fbgemm.f8f8bf16_rowwise`
Indeed, the model is dequantized on the fly even if `run_compressed=True` (this worked before, but compressed-tensors preferred to delegate that work to vllm since it was duplicated effort). We could add support for the most-used methods here in transformers, but it would be nice to reuse kernels from vllm if possible. Also, I'm not sure it's worth using fbgemm: torchao doesn't use this lib anymore, and it was quite a struggle to get it installed. The best option would be to use kernels hosted on kernels-community, as we do for our current fp8 support.
I've removed the fbgemm op and now only use kernels from kernels-community.
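For context, kernels hosted under kernels-community are fetched at runtime with the `kernels` package. The snippet below only shows the general mechanism (using the activation kernel from the package's own docs); it is not the specific FP8 kernel this PR wires in:

```python
import torch
from kernels import get_kernel

# Download an optimized kernel from the Hugging Face Hub at runtime.
# Repo id here is the example from the kernels docs, purely illustrative.
activation = get_kernel("kernels-community/activation")

x = torch.randn((10, 10), dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)  # kernel ops are exposed as attributes of the loaded module
print(y)
```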
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Hi @SunMarc. Please check if the integration is OK. I'll clean up the tests and docs after you approve the integration.
What does this PR do?
This PR adds native FP8 matmul kernel support for compressed-tensors FP8 quantized models in transformers. Previously, compressed-tensors FP8 models were loaded via the `compressed-tensors` library and dequantized back to FP16/BF16 for inference. With this change, FP8 weights are kept in FP8 format and inference uses hardware-accelerated FP8 matmul kernels (`torch._scaled_mm` on XPU, `fbgemm.f8f8bf16_rowwise` on CUDA).

Key changes:
New file: `src/transformers/integrations/compressed_tensors_fp8.py`
- `CTFP8Linear`: FP8 linear layer that stores weights in FP8 and uses row-wise FP8 matmul kernels. Activations are dynamically quantized per row via `quantize_fp8_per_row` (a rough sketch of this forward path appears after the list of changes below).
- Weight conversion ops (`CompressedTensorsScaleConvert`, `CompressedTensorsFp8Dequantize`) to handle the checkpoint format conversion (e.g. `weight_scale` → `weight_scale_inv`).
- `CTFP8PerRowQuantize`: online quantization support, i.e. quantizing BF16 weights to FP8 per row on the fly during model loading.
Modified: `src/transformers/quantizers/quantizer_compressed_tensors.py`
- `CompressedTensorsHfQuantizer` now detects FP8 quantization configs (`float` type, `num_bits=8`) and automatically routes to the FP8 kernel path when a GPU/XPU is available. It falls back to the default compressed-tensors dequantize path on CPU.
- Adds `get_weight_conversions()` and `get_quantize_ops()` to support both pre-quantized loading and online quantization.
Modified: `src/transformers/quantizers/auto.py`
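As a rough illustration of the forward path described above (per-row dynamic activation quantization feeding a row-wise FP8 matmul), here is a minimal sketch. It assumes a PyTorch build whose `torch._scaled_mm` accepts row-wise scales; the helper names mirror the PR, but the actual `CTFP8Linear` implementation may differ:

```python
# Minimal sketch only: not the PR's implementation, just the shape of the
# row-wise FP8 forward path it describes.
import torch

FP8_DTYPE = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8_DTYPE).max


def quantize_fp8_per_row(x: torch.Tensor):
    """Dynamically quantize a 2D tensor to FP8 with one scale per row."""
    amax = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
    scale = (amax / FP8_MAX).float()                    # dequant scale, shape (M, 1)
    x_fp8 = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(FP8_DTYPE)
    return x_fp8, scale


def fp8_rowwise_linear(x, weight_fp8, weight_scale_inv, bias=None):
    """y = x @ W^T where W is stored in FP8 with per-output-channel dequant scales."""
    orig_shape = x.shape
    x2d = x.reshape(-1, orig_shape[-1])
    x_fp8, x_scale = quantize_fp8_per_row(x2d)          # (M, K) fp8, (M, 1) fp32
    # weight_fp8 is (N, K); _scaled_mm expects the second operand column-major,
    # hence the transpose. weight_scale_inv is (N, 1) fp32, passed as (1, N).
    y = torch._scaled_mm(
        x_fp8,
        weight_fp8.t(),
        scale_a=x_scale,
        scale_b=weight_scale_inv.t(),
        bias=bias,                                      # optional, in the output dtype
        out_dtype=torch.bfloat16,
    )
    # On CPU (no CUDA/XPU), the quantizer instead falls back to the default
    # compressed-tensors dequantize path, as described above.
    return y.reshape(*orig_shape[:-1], -1)
```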
Supported models
- `CompressedTensorsConfig` with FP8 quantization scheme.

Usage
Pre-quantized model (no config needed)
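A minimal loading example; the checkpoint id below is only an illustration of a compressed-tensors FP8 model, not one this PR specifically tests:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any compressed-tensors FP8 checkpoint; this model id is illustrative.
model_id = "neuralmagic/Meta-Llama-3-8B-Instruct-FP8"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # FP8 kernels on CUDA/XPU, dequantize fallback on CPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("FP8 inference test:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```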
Online quantization
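The exact configuration surface for on-the-fly FP8 quantization is defined by this PR; the sketch below is only a guess at the intent, with the scheme expressed in compressed-tensors config-group style. Argument names and the base model id are illustrative and may not match the final API:

```python
import torch
from transformers import AutoModelForCausalLM, CompressedTensorsConfig

# Hypothetical on-the-fly FP8 setup: a dynamic per-row FP8 scheme applied to
# all Linear layers while loading a BF16 checkpoint.
quant_config = CompressedTensorsConfig(
    config_groups={
        "group_0": {
            "targets": ["Linear"],
            "weights": {"num_bits": 8, "type": "float", "strategy": "channel", "symmetric": True},
            "input_activations": {"num_bits": 8, "type": "float", "strategy": "token", "dynamic": True},
        }
    },
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",   # any BF16 base model, illustrative
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quant_config,
)
```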
Devices
- XPU: `torch._scaled_mm`
- CUDA: `fbgemm.f8f8bf16_rowwise`

@sywangyi