[Quantization] Remove FP4 emulation; Fall-back to marlin for device < 100 #19563
Conversation
Signed-off-by: Dipika <dipikasikka1@gmail.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Summary of Changes
Hello @dsikka, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request refactors the NVFP4 quantization implementation by removing the software emulation path from the core W4A4 scheme. Instead, it introduces a fallback mechanism to the W4A16 scheme for platforms that do not natively support the required CUTLASS FP4 operations (specifically targeting devices below capability 100). The removed emulation code has been relocated to a utility module.
Highlights
- FP4 Emulation Removal: Removed the software-based FP4 emulation logic directly from the `CompressedTensorsW4A4Fp4` scheme implementation.
- Fallback Mechanism: Introduced a check for native CUTLASS FP4 support (requiring device capability 100+) in the scheme selection logic. If not supported, it now falls back to using the `CompressedTensorsW4A16Fp4` scheme (see the sketch after this list).
- Minimum Capability Update: Updated the minimum required device capability for the `CompressedTensorsW4A4Fp4` scheme from 80 to 100, aligning it with the requirement for native CUTLASS FP4 support.
- Emulation Code Relocation: Moved the removed FP4 emulation logic into a new utility function `run_nvfp4_emulations` within `nvfp4_emulation_utils.py`.
- W4A16 Scheme Adaptation: Modified the `CompressedTensorsW4A16Fp4` scheme to optionally handle an input global scale, necessary when it is used as a fallback for the W4A4 scheme.
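To make the fallback concrete, here is a minimal sketch of how the selection could be wired. It is illustrative only: the two scheme classes exist in vLLM and `cutlass_fp4_supported` appears in the diff later in this thread (assumed here to be a classmethod of `CompressedTensorsW4A4Fp4`), but the selection function's name, the constructor arguments, and the `has_input_global_scale` keyword are assumptions, not the PR's actual code.

```python
# Minimal sketch (not the PR's actual code) of the scheme-selection fallback.
# Assumed: select_nvfp4_scheme's name/signature, the no-arg constructors,
# and the has_input_global_scale keyword.
import logging

from vllm.model_executor.layers.quantization.compressed_tensors.schemes import (
    CompressedTensorsW4A4Fp4, CompressedTensorsW4A16Fp4)

logger = logging.getLogger(__name__)


def select_nvfp4_scheme():
    """Pick W4A4 NVFP4 when the GPU has native CUTLASS FP4 support,
    otherwise fall back to the weight-only W4A16 (Marlin) scheme."""
    if CompressedTensorsW4A4Fp4.cutlass_fp4_supported():
        # Device capability >= 100: native CUTLASS FP4 GEMM is available.
        return CompressedTensorsW4A4Fp4()
    # Older devices (< 100): keep FP4 weights but run the GEMM via Marlin,
    # carrying the input global scale so W4A4 checkpoints still load.
    logger.warning("CUTLASS FP4 not supported; falling back to W4A16 (Marlin).")
    return CompressedTensorsW4A16Fp4(has_input_global_scale=True)
```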
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request removes FP4 emulation from the `CompressedTensorsW4A4Fp4` scheme, relocates the emulation logic to a utility file, and implements a fallback to `CompressedTensorsW4A16Fp4` (Marlin) for devices with compute capability < 100. A minor typo in a parameter name (`has_input_globa_scale`) should be corrected.
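As a rough illustration of the W4A16 adaptation the review refers to, the sketch below shows how a scheme might optionally register the input global scale when acting as the W4A4 fallback. The class name, the method shape, and the corrected keyword `has_input_global_scale` are assumptions for illustration, not the PR's code.

```python
# Illustrative sketch only; not vLLM's CompressedTensorsW4A16Fp4.
# Assumed names: W4A16Fp4Sketch, has_input_global_scale (the corrected
# spelling of the typo flagged in the review), input_global_scale.
import torch
from torch.nn import Parameter


class W4A16Fp4Sketch:

    def __init__(self, has_input_global_scale: bool = False):
        # Only True when this scheme is used as the fallback for W4A4
        # checkpoints, which carry an activation (input) global scale.
        self.has_input_global_scale = has_input_global_scale

    def create_weights(self, layer: torch.nn.Module) -> None:
        if self.has_input_global_scale:
            layer.register_parameter(
                "input_global_scale",
                Parameter(torch.empty(1, dtype=torch.float32),
                          requires_grad=False))
```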
```diff
     def get_min_capability(cls) -> int:
-        # dont restrict as emulations
-        return 80
+        return 100

-    def run_nvfp4_emulations(self, x: torch.Tensor, layer):
-        x_m, x_k = x.shape
-        output_dtype = x.dtype
-
-        # quantize input to (FP4 and interleaved block scale)
-        x_fp4, x_blockscale = ref_nvfp4_quant(x, layer.input_global_scale,
-                                              self.group_size)
```
```python
@classmethod
def cutlass_fp4_supported(cls) -> bool:
    if not current_platform.is_cuda():
        return False
    capability_tuple = current_platform.get_device_capability()
    capability = -1 if capability_tuple is None else capability_tuple.to_int()  # noqa: E501
    return cutlass_scaled_mm_supports_fp4(capability)
```
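For readers outside the vLLM codebase: `DeviceCapability.to_int()` packs the CUDA compute capability as `major * 10 + minor`, so the 100 threshold in the PR title corresponds to capability 10.0 hardware. Below is a self-contained sketch of the same gating logic using plain PyTorch; the function name and the hard-coded `>= 100` check are simplifying assumptions, whereas the PR itself defers to `cutlass_scaled_mm_supports_fp4`.

```python
# Standalone illustration (not vLLM code) of the capability gate above.
# Assumes the major * 10 + minor packing used by DeviceCapability.to_int()
# and a plain ">= 100" check instead of cutlass_scaled_mm_supports_fp4().
import torch


def fp4_gemm_natively_supported() -> bool:
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    capability = major * 10 + minor  # e.g. (10, 0) -> 100
    return capability >= 100
```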
```python
def run_nvfp4_emulations(x: torch.Tensor, input_global_scale: torch.Tensor,
                         weight: torch.Tensor,
                         weight_scale_swizzled: torch.Tensor,
                         weight_global_scale: torch.Tensor):
    group_size = 16
    x_m, x_k = x.shape
    output_dtype = x.dtype

    # quantize input to (FP4 and interleaved block scale)
    x_fp4, x_blockscale = ref_nvfp4_quant(x, input_global_scale, group_size)

    # dequantize input
    x_fp4 = x_fp4.reshape(x_m, x_k // group_size, group_size)
    x_blockscale = x_blockscale.unsqueeze(-1) / input_global_scale
    x_dq = (x_fp4 * x_blockscale).reshape(x_m, x_k).to(output_dtype)
    del x_fp4, x_blockscale

    # dequantize weight
    w_fp4 = weight.data.view(torch.uint8)
    w_dq = dequantize_to_dtype(w_fp4, weight_scale_swizzled.data,
                               weight_global_scale, output_dtype, x.device,
                               group_size)

    # matmul
    out = torch.matmul(x_dq, w_dq.t())
    del w_dq, x_dq
    return out
```
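For context, a hypothetical call site for the relocated helper might look like the following; the layer attribute names mirror the helper's parameters but are assumptions, not code from this PR.

```python
# Hypothetical call site for run_nvfp4_emulations (defined above). The layer
# attribute names (input_global_scale, weight, weight_scale_swizzled,
# weight_global_scale, bias) are assumptions mirroring the parameters above.
import torch


def emulated_forward(layer: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    out = run_nvfp4_emulations(
        x=x,
        input_global_scale=layer.input_global_scale,
        weight=layer.weight,
        weight_scale_swizzled=layer.weight_scale_swizzled,
        weight_global_scale=layer.weight_global_scale,
    )
    if getattr(layer, "bias", None) is not None:
        out = out + layer.bias
    return out
```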
Signed-off-by: Dipika <dipikasikka1@gmail.com>
@mgoin @robertgshaw2-redhat
… 100 (vllm-project#19563) Signed-off-by: minpeter <kali2005611@gmail.com>
… 100 (vllm-project#19563) Signed-off-by: Yang Wang <elainewy@meta.com>
Purpose
Test Plan