[#12183][fix] Fix TRTLLM-Gen NVFP4 MoE scales for mixed-precision checkpoints #12240
tcherckez-nvidia merged 1 commit into NVIDIA:main from
Conversation
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"
📝 Walkthrough
The changes update the NVFP4 MoE fusion backend configuration from trtllm to cutlass as the default, rework input scale handling with conditional fallback behavior for inconsistent scales, update test configurations with a new model variant, and introduce parametrized moe_backend testing for the Nemotron SuperV3 accuracy tests.
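A minimal sketch of the configuration this implies, assuming a Pydantic-style model: the class name and the backend literal are taken from the review comments below, allow_different_input_scales from the commit notes, and the rest of the field layout is illustrative.

```python
# Hypothetical sketch of FuseNVFP4MoeConfig, assembled from the review comments
# and commit notes in this PR; not the exact code in fused_moe.py.
from typing import Literal

from pydantic import BaseModel


class FuseNVFP4MoeConfig(BaseModel):
    # "cutlass" (the kernel formerly named "trtllm") is the new default;
    # "trtllm_gen" selects the TRTLLM-Gen kernel.
    backend: Literal["cutlass", "trtllm_gen"] = "cutlass"
    # Opt-in fallback for checkpoints whose per-expert input scales disagree.
    allow_different_input_scales: bool = False
```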
Estimated code review effort: 🎯 4 (Complex), ⏱️ ~35 minutes
🚥 Pre-merge checks: ❌ 3 failed (2 warnings, 1 inconclusive)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py`:
- Line 425: The test's parametrize list uses the invalid backend value "trtllm", causing Pydantic validation failures when FuseNVFP4MoeConfig is instantiated (backend is Literal["cutlass","trtllm_gen"]). Update the pytest.mark.parametrize call for "moe_backend" to use the allowed values (e.g., ["cutlass","trtllm_gen"]) so the tests provide valid inputs for FuseNVFP4MoeConfig.backend and avoid instantiation errors.
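A hedged sketch of the suggested fix; the test name and body are illustrative, and the config class is a minimal stand-in for the one in the transform library:

```python
# Sketch only: a parametrization restricted to the values accepted by
# FuseNVFP4MoeConfig.backend, so Pydantic validation cannot fail.
from typing import Literal

import pytest
from pydantic import BaseModel


class FuseNVFP4MoeConfig(BaseModel):  # minimal stand-in for the real config
    backend: Literal["cutlass", "trtllm_gen"] = "cutlass"


@pytest.mark.parametrize("moe_backend", ["cutlass", "trtllm_gen"])
def test_moe_backend_validates(moe_backend: str) -> None:
    # Passing "trtllm" here would raise a pydantic.ValidationError.
    assert FuseNVFP4MoeConfig(backend=moe_backend).backend == moe_backend
```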
In `@tests/integration/test_lists/test-db/l0_b200.yml`:
- Around lines 250-252: Test entries use the invalid backend value "trtllm", which violates FuseNVFP4MoeConfig.backend (Literal["cutlass","trtllm_gen"]). Update the test parametrization that lists ["trtllm","trtllm_gen"] and any YAML entries like nvfp4-1-attn_dp_off-trtllm-trtllm to use a valid literal (replace "trtllm" with either "trtllm_gen" or "cutlass") so the Pydantic validation of FuseNVFP4MoeConfig.backend succeeds.
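Illustratively, the rename in the test list might look like the following; the entry path and test-ID format are assumptions based on the ID quoted above, and only the backend-token change comes from the review:

```yaml
# tests/integration/test_lists/test-db/l0_b200.yml (sketch)
# before: ...test_nvfp4[nvfp4-1-attn_dp_off-trtllm-trtllm]  # invalid backend token
# after (assuming the final token is the moe_backend parameter):
- accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_nvfp4[nvfp4-1-attn_dp_off-trtllm-trtllm_gen]
```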
📒 Files selected for processing (6):
- examples/auto_deploy/super_v3.yaml
- tensorrt_llm/_torch/auto_deploy/config/default.yaml
- tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
- tests/integration/defs/accuracy/test_llm_api_autodeploy.py
- tests/integration/test_lists/test-db/l0_b200.yml
- tests/integration/test_lists/test-db/l0_dgx_b200.yml
PR_Github #39056 [ run ] triggered by Bot. Commit:
PR_Github #39056 [ run ] completed with state
Force-pushed a30596b to d2d92ce (Compare)
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"
PR_Github #39069 [ run ] triggered by Bot. Commit:
PR_Github #39069 [ run ] completed with state
Force-pushed d2d92ce to 473a984 (Compare)
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"
PR_Github #39089 [ run ] triggered by Bot. Commit:
PR_Github #39089 [ run ] completed with state
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"
PR_Github #39411 [ run ] triggered by Bot. Commit:
PR_Github #39411 [ run ] completed with state
Force-pushed 473a984 to 2a2d2e8 (Compare)
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-AutoDeploy-1"
PR_Github #39552 [ run ] triggered by Bot. Commit:
PR_Github #39552 [ run ] completed with state
/bot reuse-pipeline
/bot help
GitHub Bot Help
Provides a user-friendly way for developers to interact with a Jenkins server. See the details below for each supported subcommand.

run
Launch build/test pipelines. All previously running jobs will be killed.

kill
Kill all running builds associated with the pull request.

skip
Skip testing for the latest commit on the pull request.

reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.
PR_Github #39599 [ reuse-pipeline ] triggered by Bot. Commit:
Force-pushed 2a2d2e8 to abf8107 (Compare)
/bot reuse-pipeline
PR_Github #39599 [ reuse-pipeline ] completed with state
PR_Github #39600 [ reuse-pipeline ] triggered by Bot. Commit:
PR_Github #39600 [ reuse-pipeline ] completed with state
[#12183][fix] Fix TRTLLM-Gen NVFP4 MoE scales for mixed-precision checkpoints

- Pass per-expert fc2_alpha (no min normalization) so w2 input scale variation across experts no longer distorts logits when sampling.
- For fc1: require w1/w3 input scales to match across experts; use the first expert's scale when they are the same, and the min when allow_different_input_scales is set.
- Note: the trtllm kernel was renamed to cutlass.
- Moved to the official Super v3 NVFP4 model.

Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
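A minimal Python sketch of the scale handling this commit message describes; function names, tensor shapes, and the exact alpha formula are assumptions rather than the code in fused_moe.py:

```python
# Illustrative sketch of the fc1/fc2 scale handling described above.
import torch


def resolve_fc1_input_scale(
    w1_w3_input_scales: torch.Tensor,  # per-expert fc1 input scales, shape [num_experts]
    allow_different_input_scales: bool = False,
) -> torch.Tensor:
    """Pick the single fc1 input scale shared by all experts."""
    first = w1_w3_input_scales[0]
    if torch.all(w1_w3_input_scales == first):
        # Scales match across experts: use the first expert's scale verbatim.
        return first
    if allow_different_input_scales:
        # Opt-in fallback for inconsistent checkpoints: the min scale avoids
        # clipping any expert's inputs.
        return w1_w3_input_scales.min()
    raise ValueError("w1/w3 input scales must match across experts")


def per_expert_fc2_alpha(
    fc2_input_scales: torch.Tensor,   # per-expert w2 input scales, [num_experts]
    fc2_weight_scales: torch.Tensor,  # per-expert w2 weight scales, [num_experts]
) -> torch.Tensor:
    # Kept per-expert, with no min-normalization across experts, so w2 input
    # scale variation no longer distorts the logits at sampling time.
    # (alpha as the product of global input and weight scales is an assumption.)
    return fc2_input_scales * fc2_weight_scales
```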
Force-pushed abf8107 to 5e40d9c (Compare)
/bot reuse-pipeline
PR_Github #39608 [ reuse-pipeline ] triggered by Bot. Commit:
PR_Github #39608 [ reuse-pipeline ] completed with state
[#12183][fix] Fix TRTLLM-Gen NVFP4 MoE scales for mixed-precision checkpoints
Made-with: Cursor
Summary by CodeRabbit
Configuration
- NVFP4 MoE fusion backend default changed from trtllm to cutlass.
Bug Fixes
- Reworked input scale handling with a conditional fallback for inconsistent per-expert scales.
Tests
- New model variant in test configurations and parametrized moe_backend for the Nemotron SuperV3 accuracy tests.
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, comment /bot help.