
[Quantization] Remove FP4 emulation; Fall-back to marlin for device < 100 #19563


Merged
merged 3 commits into from
Jun 16, 2025

Conversation

Contributor

@dsikka dsikka commented Jun 12, 2025

Purpose

  • Remove the FP4 emulation code from the nvfp4 compressed-tensors scheme
  • Fall back to Marlin when running on an unsupported device/runtime
  • Move the emulation code to utils

Test Plan

  • Rerun the quantization nvfp4a16 and nvfp4 tests (an example invocation is sketched below)
  • Ran the tests on H100 and B200 and confirmed the correct kernel is selected on each
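
A minimal sketch of what rerunning those tests locally might look like; the test-file path and the -k filter are assumptions for illustration, not taken from this PR:

```python
# Hypothetical invocation of the compressed-tensors NVFP4 tests via pytest.
# The path and keyword filter are assumptions, not part of this PR.
import pytest

pytest.main(["tests/quantization/test_compressed_tensors.py", "-k", "nvfp4", "-q"])
```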

Signed-off-by: Dipika <dipikasikka1@gmail.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @dsikka, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the NVFP4 quantization implementation by removing the software emulation path from the core W4A4 scheme. Instead, it introduces a fallback mechanism to the W4A16 scheme for platforms that do not natively support the required CUTLASS FP4 operations (specifically targeting devices below capability 100). The removed emulation code has been relocated to a utility module.

Highlights

  • FP4 Emulation Removal: Removed the software-based FP4 emulation logic directly from the CompressedTensorsW4A4Fp4 scheme implementation.
  • Fallback Mechanism: Introduced a check for native CUTLASS FP4 support (requiring device capability 100+) in the scheme selection logic. If not supported, it now falls back to using the CompressedTensorsW4A16Fp4 scheme (a sketch of this selection logic follows this list).
  • Minimum Capability Update: Updated the minimum required device capability for the CompressedTensorsW4A4Fp4 scheme from 80 to 100, aligning it with the requirement for native CUTLASS FP4 support.
  • Emulation Code Relocation: Moved the removed FP4 emulation logic into a new utility function run_nvfp4_emulations within nvfp4_emulation_utils.py.
  • W4A16 Scheme Adaptation: Modified the CompressedTensorsW4A16Fp4 scheme to optionally handle an input global scale, necessary when it's used as a fallback for the W4A4 scheme.
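
The fallback described above can be summarized with a minimal sketch. This is not the exact vLLM code: the scheme class names come from this PR, while the selection helper and the import path are assumptions.

```python
# Hypothetical sketch of the W4A4 -> W4A16 fallback (import path assumed).
from vllm.model_executor.layers.quantization.compressed_tensors.schemes import (
    CompressedTensorsW4A16Fp4, CompressedTensorsW4A4Fp4)


def choose_nvfp4_scheme():
    """Return the scheme class to use for an NVFP4 (W4A4) checkpoint."""
    if CompressedTensorsW4A4Fp4.cutlass_fp4_supported():
        # Native path: CUTLASS FP4 scaled-mm kernels (device capability 100+).
        return CompressedTensorsW4A4Fp4
    # Fallback path: the weight-only W4A16 scheme backed by the Marlin kernels.
    # Per the PR description, it is constructed so it can also consume the
    # input global scale produced for the W4A4 checkpoint.
    return CompressedTensorsW4A16Fp4
```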

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request removes FP4 emulation from the CompressedTensorsW4A4Fp4 scheme, relocates the emulation logic to a utility file, and implements a fallback to CompressedTensorsW4A16Fp4 (Marlin) for devices with compute capability < 100. A minor typo in a parameter name (has_input_globa_scale) should be corrected.

Comment on lines 28 to +29
     def get_min_capability(cls) -> int:
-        # dont restrict as emulations
-        return 80
-
-    def run_nvfp4_emulations(self, x: torch.Tensor, layer):
-        x_m, x_k = x.shape
-        output_dtype = x.dtype
-
-        # quantize input to (FP4 and interleaved block scale)
-        x_fp4, x_blockscale = ref_nvfp4_quant(x, layer.input_global_scale,
-                                              self.group_size)
+        return 100

medium

Changing get_min_capability from 80 to 100 aligns with the PR's goal.

Comment on lines +31 to +38
    @classmethod
    def cutlass_fp4_supported(cls) -> bool:
        if not current_platform.is_cuda():
            return False
        capability_tuple = current_platform.get_device_capability()
        capability = -1 if capability_tuple is None else capability_tuple.to_int(  # noqa: E501
        )
        return cutlass_scaled_mm_supports_fp4(capability)

medium

Moving the cutlass_fp4_supported check to a class method is a good refactoring.
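
For context on the 100 threshold: vLLM encodes the device compute capability (major, minor) as major * 10 + minor, so 100 corresponds to capability 10.0 (Blackwell-class GPUs such as B200), while an H100 (9.0) reports 90 and takes the Marlin fallback. A small illustrative check, assuming vLLM's platform helpers:

```python
# Illustrative capability check; import path assumed from vLLM's platform helpers.
from vllm.platforms import current_platform

cap = current_platform.get_device_capability()
if cap is not None:
    # to_int() packs (major, minor) as major * 10 + minor:
    # e.g. 90 on H100 (9.0) and 100 on B200 (10.0).
    print(cap.to_int())
```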

Comment on lines +107 to +133
def run_nvfp4_emulations(x: torch.Tensor, input_global_scale: torch.Tensor,
                         weight: torch.Tensor,
                         weight_scale_swizzled: torch.Tensor,
                         weight_global_scale: torch.Tensor):
    group_size = 16
    x_m, x_k = x.shape
    output_dtype = x.dtype

    # quantize input to (FP4 and interleaved block scale)
    x_fp4, x_blockscale = ref_nvfp4_quant(x, input_global_scale, group_size)

    # dequantize input
    x_fp4 = x_fp4.reshape(x_m, x_k // group_size, group_size)
    x_blockscale = x_blockscale.unsqueeze(-1) / input_global_scale
    x_dq = (x_fp4 * x_blockscale).reshape(x_m, x_k).to(output_dtype)
    del x_fp4, x_blockscale

    # dequantize weight
    w_fp4 = weight.data.view(torch.uint8)
    w_dq = dequantize_to_dtype(w_fp4, weight_scale_swizzled.data,
                               weight_global_scale, output_dtype, x.device,
                               group_size)

    # matmul
    out = torch.matmul(x_dq, w_dq.t())
    del w_dq, x_dq
    return out

medium

Moving the run_nvfp4_emulations function here from the scheme class is a good separation of concerns.
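
For illustration, a rough sketch of how a scheme could call the relocated helper; the wrapper function, the import path, and the layer attribute names are assumptions that mirror the parameters shown above, not vLLM's exact API:

```python
# Hypothetical caller for the relocated emulation helper (import path assumed).
import torch

from vllm.model_executor.layers.quantization.utils.nvfp4_emulation_utils import (
    run_nvfp4_emulations)


def apply_weights_emulated(layer, x: torch.Tensor,
                           bias: torch.Tensor | None = None) -> torch.Tensor:
    # Dequantize-and-matmul emulation path; attribute names on `layer` mirror
    # the run_nvfp4_emulations signature above and are assumptions here.
    out = run_nvfp4_emulations(
        x=x,
        input_global_scale=layer.input_global_scale,
        weight=layer.weight,
        weight_scale_swizzled=layer.weight_scale_swizzled,
        weight_global_scale=layer.weight_global_scale,
    )
    if bias is not None:
        out = out + bias
    return out
```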

dsikka added 2 commits June 12, 2025 11:59
Signed-off-by: Dipika <dipikasikka1@gmail.com>
Signed-off-by: Dipika <dipikasikka1@gmail.com>
@dsikka dsikka marked this pull request as ready for review June 12, 2025 23:44
@dsikka
Copy link
Contributor Author

dsikka commented Jun 15, 2025

@mgoin @robertgshaw2-redhat
Can I get a ready label

@mgoin mgoin added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 15, 2025
@mgoin mgoin merged commit 6bc7b57 into vllm-project:main Jun 16, 2025
80 checks passed
yeqcharlotte pushed a commit to yeqcharlotte/vllm that referenced this pull request Jun 22, 2025
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Jun 24, 2025
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jun 30, 2025
wseaton pushed a commit to wseaton/vllm that referenced this pull request Jun 30, 2025