
[#12183][fix] Fix TRTLLM-Gen NVFP4 MoE scales for mixed-precision checkpoints #12240

Merged
tcherckez-nvidia merged 1 commit into NVIDIA:main from tcherckez-nvidia:fix-trtllmgen on Mar 19, 2026

Conversation

@tcherckez-nvidia (Collaborator) commented Mar 16, 2026

  • Pass per-expert fc2_alpha (no min-normalization) so that w2 input-scale variation across experts no longer distorts logits during sampling.
  • For fc1, require the w1/w3 input scales to match across experts; use the first expert's scale when they match, or fall back to the minimum when allow_different_input_scales is set.
  • Note: the trtllm backend was renamed to cutlass.
  • Added more test flavors to accuracy testing.
  • Switched to the official Super v3 NVFP4 model.
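The fc1/fc2 scale handling described in the bullets above can be sketched in plain Python. This is an illustrative sketch only: the function and argument names are hypothetical, not the actual fused_moe.py code, and scales are shown as floats rather than tensors.

```python
def resolve_fc1_input_scale(expert_scales, allow_different_input_scales=False):
    """Pick one shared fc1 (w1/w3) input scale for all experts.

    Illustrative sketch: experts must agree on the scale unless the
    caller opts into a min-based fallback.
    """
    first = expert_scales[0]
    if all(s == first for s in expert_scales):
        return first  # all experts agree: use the first expert's scale
    if allow_different_input_scales:
        return min(expert_scales)  # conservative min fallback
    raise ValueError("w1/w3 input scales must match across experts")


def resolve_fc2_alpha(per_expert_alpha):
    # fc2 keeps per-expert alphas (no min-normalization), so w2 input
    # scale variation across experts cannot distort sampled logits
    return list(per_expert_alpha)
```

The key asymmetry is that fc2 preserves per-expert values while fc1 collapses to a single shared scale, failing loudly unless the caller explicitly allows divergence.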

Made-with: Cursor

Summary by CodeRabbit

  • Configuration

    • Changed default NVFP4 MoE fusion backend from trtllm to cutlass across all configurations.
  • Bug Fixes

    • Improved FP8 NVFP4 input scale handling with flexible configuration and fallback warnings instead of strict failures.
  • Tests

    • Expanded test coverage with parameterized MoE backend selection for comprehensive validation.
    • Updated model variant for NVFP4 accuracy testing.
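The default-backend change summarized above would look roughly like this as a transform-config fragment. This is illustrative only: the key names follow the transforms.fuse_nvfp4_moe path mentioned in this PR, not the verbatim contents of default.yaml.

```yaml
transforms:
  fuse_nvfp4_moe:
    backend: cutlass  # previously "trtllm"; "trtllm_gen" remains selectable
```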

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@tcherckez-nvidia (Collaborator, Author):

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"

@coderabbitai bot (Contributor) commented Mar 16, 2026

📝 Walkthrough

The changes update the NVFP4 MoE fusion backend configuration from trtllm to cutlass as the default, rework input scale handling logic with conditional fallback behavior for inconsistent scales, update test configurations with a new model variant, and introduce parametrized moe_backend testing for the Nemotron SuperV3 accuracy tests.

Changes

  • Configuration Updates (examples/auto_deploy/super_v3.yaml, tensorrt_llm/_torch/auto_deploy/config/default.yaml): backend changed from trtllm to trtllm_gen and cutlass respectively for the fuse_nvfp4_moe transform.
  • Core MoE Fusion Implementation (tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py): reworked FP8 NVFP4 input scale handling with conditional fallback for different scales; updated FuseNVFP4MoeConfig.backend from Literal["trtllm", "trtllm_gen"] to Literal["cutlass", "trtllm_gen"] with default "cutlass"; changed backend dispatch to the cutlass path; modified the per-expert vs. global scale decision flow with per-expert fc2_alpha handling and adjusted gated-MLP scale checks.
  • Test Implementation (tests/integration/defs/accuracy/test_llm_api_autodeploy.py): updated the NVFP4 model path variant; added a parametrized moe_backend parameter to the test_accuracy method signature; wired moe_backend into the test configuration for transforms.fuse_nvfp4_moe.backend.
  • Test Coverage (tests/integration/test_lists/test-db/l0_b200.yml, tests/integration/test_lists/test-db/l0_dgx_b200.yml): replaced single test entries with multiple variants covering different attn_dp configurations and backend combinations (trtllm and trtllm_gen).
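The Literal-based backend field described above can be sketched with a minimal Pydantic model. This is a simplified stand-in, not the actual FuseNVFP4MoeConfig definition; it only illustrates how the narrowed Literal rejects the old "trtllm" value.

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class MoeBackendConfig(BaseModel):
    """Simplified stand-in for FuseNVFP4MoeConfig's backend field."""

    backend: Literal["cutlass", "trtllm_gen"] = "cutlass"


print(MoeBackendConfig().backend)                      # defaults to "cutlass"
print(MoeBackendConfig(backend="trtllm_gen").backend)  # still selectable

try:
    MoeBackendConfig(backend="trtllm")  # the old backend name is rejected
except ValidationError:
    print("invalid backend rejected")
```

This is exactly why any stale "trtllm" strings left in test parametrizations or test-list YAMLs fail at config instantiation time rather than at kernel dispatch.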

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~35 minutes

Possibly related PRs

Suggested reviewers

  • suyoggupta
  • marinayanov
  • jieli-matrix
🚥 Pre-merge checks: ❌ 3 failed (2 warnings, 1 inconclusive)

  • Docstring Coverage ⚠️ Warning — docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Description check ⚠️ Warning — the PR description is missing a properly formatted title and lacks detail in the Description section. Resolution: add a properly formatted title like [#12240][fix] Fix NVFP4 MoE scales for mixed-precision checkpoints, and expand the Description section with rationale and implementation details.
  • Title check ❓ Inconclusive — the title references issue #12183 and identifies the change type as [fix], but is truncated with an ellipsis and does not clearly convey the main objective. Resolution: complete the title, e.g. '[#12183][fix] Fix NVFP4 MoE scales handling for mixed-precision checkpoints and rename trtllm backend to cutlass.'

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot (Contributor) left a review comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py`:
- Line 425: The test's parametrize uses invalid backend value "trtllm" causing
Pydantic validation failures when instantiating FuseNVFP4MoeConfig (backend is
Literal["cutlass","trtllm_gen"]); update the pytest.mark.parametrize call for
"moe_backend" to use the allowed values (e.g., ["cutlass","trtllm_gen"]) so
tests provide valid inputs for FuseNVFP4MoeConfig.backend and avoid
instantiation errors.
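A corrected parametrization along the lines the bot suggests might look like this. It is an illustrative sketch: the real test_accuracy method and its config wiring in test_llm_api_autodeploy.py differ, and only the allowed backend literals are taken from this PR.

```python
import pytest


# only literals accepted by FuseNVFP4MoeConfig.backend after this PR
@pytest.mark.parametrize("moe_backend", ["cutlass", "trtllm_gen"])
def test_accuracy(moe_backend):
    # wire the backend into the transform config the way the PR describes
    config = {"transforms": {"fuse_nvfp4_moe": {"backend": moe_backend}}}
    assert config["transforms"]["fuse_nvfp4_moe"]["backend"] in (
        "cutlass",
        "trtllm_gen",
    )
```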

In `@tests/integration/test_lists/test-db/l0_b200.yml`:
- Around line 250-252: Test entries use an invalid backend value "trtllm" which
violates FuseNVFP4MoeConfig.backend (Literal["cutlass","trtllm_gen"]); update
the test parametrization that lists ["trtllm","trtllm_gen"] and any YAML entries
like nvfp4-1-attn_dp_off-trtllm-trtllm to use a valid literal (replace "trtllm"
with either "trtllm_gen" or "cutlass") so the pydantic validation for
FuseNVFP4MoeConfig.backend succeeds.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6ee8ad2a-104a-4a0b-8bbb-51c4c54dfe43

📥 Commits

Reviewing files that changed from the base of the PR and between 503d678 and a30596b.

📒 Files selected for processing (6)
  • examples/auto_deploy/super_v3.yaml
  • tensorrt_llm/_torch/auto_deploy/config/default.yaml
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tests/integration/defs/accuracy/test_llm_api_autodeploy.py
  • tests/integration/test_lists/test-db/l0_b200.yml
  • tests/integration/test_lists/test-db/l0_dgx_b200.yml

@tensorrt-cicd (Collaborator):

PR_Github #39056 [ run ] triggered by Bot. Commit: a30596b Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39056 [ run ] completed with state SUCCESS. Commit: a30596b
/LLM/main/L0_MergeRequest_PR pipeline #30325 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tcherckez-nvidia (Collaborator, Author):

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"

@tensorrt-cicd (Collaborator):

PR_Github #39069 [ run ] triggered by Bot. Commit: d2d92ce Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39069 [ run ] completed with state FAILURE. Commit: d2d92ce
/LLM/main/L0_MergeRequest_PR pipeline #30334 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tcherckez-nvidia (Collaborator, Author):

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"

@tensorrt-cicd (Collaborator):

PR_Github #39089 [ run ] triggered by Bot. Commit: 473a984 Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39089 [ run ] completed with state SUCCESS. Commit: 473a984
/LLM/main/L0_MergeRequest_PR pipeline #30351 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tcherckez-nvidia (Collaborator, Author):

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"

@tensorrt-cicd (Collaborator):

PR_Github #39411 [ run ] triggered by Bot. Commit: 473a984 Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39411 [ run ] completed with state SUCCESS. Commit: 473a984
/LLM/main/L0_MergeRequest_PR pipeline #30640 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tcherckez-nvidia (Collaborator, Author):

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"

@tensorrt-cicd (Collaborator):

PR_Github #39552 [ run ] triggered by Bot. Commit: 2a2d2e8 Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39552 [ run ] completed with state SUCCESS. Commit: 2a2d2e8
/LLM/main/L0_MergeRequest_PR pipeline #30769 completed with status: 'SUCCESS'

CI Report

Link to invocation

@tcherckez-nvidia (Collaborator, Author):

/bot reuse-pipeline

@tcherckez-nvidia (Collaborator, Author):

/bot help

@github-actions:

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@tensorrt-cicd (Collaborator):

PR_Github #39599 [ reuse-pipeline ] triggered by Bot. Commit: 2a2d2e8 Link to invocation

@tcherckez-nvidia (Collaborator, Author):

/bot reuse-pipeline

@tensorrt-cicd (Collaborator):

PR_Github #39599 [ reuse-pipeline ] completed with state SUCCESS. Commit: 2a2d2e8
Reusing PR_Github #39552 for commit 2a2d2e8

Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39600 [ reuse-pipeline ] triggered by Bot. Commit: abf8107 Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39600 [ reuse-pipeline ] completed with state SUCCESS. Commit: abf8107
Reusing PR_Github #39552 for commit abf8107

Link to invocation

[#12183][fix] Fix TRTLLM-Gen NVFP4 MoE scales for mixed-precision checkpoints

- Pass per-expert fc2_alpha (no min normalization) so w2 input scale
  variation across experts no longer distorts logits when sampling.
- For fc1: require w1/w3 input scales to match across experts; use
  first expert's scale when same, use min when allow_different_input_scales.
- Note: the trtllm kernel was renamed to cutlass.
- Moved to use the official Super v3 NVFP4 model.

Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
@tcherckez-nvidia (Collaborator, Author):

/bot reuse-pipeline

@tcherckez-nvidia tcherckez-nvidia enabled auto-merge (squash) March 19, 2026 14:39
@tensorrt-cicd (Collaborator):

PR_Github #39608 [ reuse-pipeline ] triggered by Bot. Commit: 5e40d9c Link to invocation

@tensorrt-cicd (Collaborator):

PR_Github #39608 [ reuse-pipeline ] completed with state SUCCESS. Commit: 5e40d9c
Reusing PR_Github #39552 for commit 5e40d9c

Link to invocation

@tcherckez-nvidia tcherckez-nvidia merged commit 9f0c204 into NVIDIA:main Mar 19, 2026
5 checks passed