
[https://nvbugs/6011517][fix] Fix autotuner OOM for trtllmGen MoE runners at large context length#12523

Merged
longlee0622 merged 2 commits into NVIDIA:main from hyukn:fix/6011517 on Apr 1, 2026

Conversation

@hyukn
Collaborator

@hyukn hyukn commented Mar 25, 2026

Summary by CodeRabbit

  • Chores
    • Updated model optimization tuning configuration to enhance inference performance.

Description

Cap tune_max_num_tokens=8192 in TuningConfig for FP4BlockScaleMoERunner. Without this cap, the autotuner uses raw input dimensions (e.g. 1M+ tokens at 128k context) to generate profiling shapes, causing OOM or CUDA_ERROR_INVALID_VALUE during tactic profiling.
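
The effect of the cap can be sketched as follows. This is a minimal illustration, not the actual TensorRT-LLM implementation: `TuningConfig` and `profiling_num_tokens` here are simplified stand-ins showing how bounding the token dimension keeps profiling shapes small regardless of the raw input size.

```python
from dataclasses import dataclass

# Stand-in for the real TuningConfig; only the field relevant to this fix.
@dataclass
class TuningConfig:
    tune_max_num_tokens: int = 8192  # cap introduced by this PR

def profiling_num_tokens(raw_num_tokens: int, cfg: TuningConfig) -> int:
    """Clamp the raw token count before generating profiling shapes, so
    tactic profiling never allocates buffers sized for the full context
    (e.g. 1M+ tokens at 128k context length)."""
    return min(raw_num_tokens, cfg.tune_max_num_tokens)

cfg = TuningConfig()
assert profiling_num_tokens(1_048_576, cfg) == 8192  # huge input is capped
assert profiling_num_tokens(2_048, cfg) == 2_048     # small inputs unchanged
```

Since the chosen tactic generalizes across token counts, profiling at the capped size still yields a valid tactic for the larger real workload.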

Test Coverage

  • Verify DeepSeek-V3 serving at 128k context length no longer OOMs during autotuning
  • Verify autotuner still produces valid tactics at smaller context lengths

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

…ners at large context length

Set tune_max_num_tokens=8192 in TuningConfig for FP4BlockScaleMoERunner trtllmGen MoE runners. Without this cap, the autotuner uses the raw input dimension (e.g. 1M+ tokens at 128k context) to generate profiling shapes, causing OOM or CUDA_ERROR_INVALID_VALUE during tactic profiling.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
@hyukn hyukn requested a review from a team as a code owner March 25, 2026 04:14
@hyukn hyukn requested a review from yizhang-nv March 25, 2026 04:14
@hyukn
Collaborator Author

hyukn commented Mar 25, 2026

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Mar 25, 2026

📝 Walkthrough

Walkthrough

Updated the FP4BlockScaleMoERunner.get_tuning_config() method to pass tune_max_num_tokens=8192 as a parameter to the TuningConfig constructor, modifying the autotuner configuration for this runner.

Changes

  • Autotuner Configuration — tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:
    Added the tune_max_num_tokens=8192 parameter to the TuningConfig constructor in the FP4BlockScaleMoERunner.get_tuning_config() method.
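
The change amounts to a one-line argument addition; a hedged sketch follows (the class and fields beyond tune_max_num_tokens=8192 are simplified stand-ins, not the repository's real signatures):

```python
# Illustrative only: a stand-in TuningConfig and runner showing where the
# tune_max_num_tokens=8192 argument is added; real signatures may differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TuningConfig:
    # None means the autotuner follows the raw input token dimension,
    # which is the pre-fix behavior that led to OOM at large contexts.
    tune_max_num_tokens: Optional[int] = None

class FP4BlockScaleMoERunner:
    @classmethod
    def get_tuning_config(cls) -> TuningConfig:
        # Before the fix: TuningConfig() -> profiling shapes follow raw input.
        # After the fix: cap the token dimension used during tactic profiling.
        return TuningConfig(tune_max_num_tokens=8192)

assert FP4BlockScaleMoERunner.get_tuning_config().tune_max_num_tokens == 8192
```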

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks — ✅ 3 passed

  • Title check — ✅ Passed: The title clearly summarizes the main change: fixing OOM issues in the autotuner for MoE runners at large context lengths by capping tuning parameters.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Description check — ✅ Passed: The pull request description is well-structured, clearly explains the issue and solution, includes test coverage details, and confirms all PR checklist items are addressed.



Comment @coderabbitai help to get the list of available commands and usage tips.

@tensorrt-cicd
Collaborator

PR_Github #40234 [ run ] triggered by Bot. Commit: 37ff110 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40234 [ run ] completed with state SUCCESS. Commit: 37ff110
/LLM/main/L0_MergeRequest_PR pipeline #31369 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@hyukn
Collaborator Author

hyukn commented Mar 30, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #40673 [ run ] triggered by Bot. Commit: 08d6add Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40673 [ run ] completed with state FAILURE. Commit: 08d6add

Link to invocation

@hyukn
Collaborator Author

hyukn commented Mar 31, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #40827 [ run ] triggered by Bot. Commit: 08d6add Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40827 [ run ] completed with state SUCCESS. Commit: 08d6add
/LLM/main/L0_MergeRequest_PR pipeline #31838 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@longlee0622
Collaborator

/bot run --disable-fail-fast

@longlee0622 longlee0622 enabled auto-merge (squash) April 1, 2026 03:40
@tensorrt-cicd
Collaborator

PR_Github #41098 [ run ] triggered by Bot. Commit: 08d6add Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41098 [ run ] completed with state SUCCESS. Commit: 08d6add
/LLM/main/L0_MergeRequest_PR pipeline #32072 completed with status: 'SUCCESS'

CI Report

Link to invocation

@longlee0622 longlee0622 merged commit c2e1e8d into NVIDIA:main Apr 1, 2026
5 checks passed
karen-sy pushed a commit to karen-sy/TensorRT-LLM that referenced this pull request Apr 7, 2026
…ners at large context length (NVIDIA#12523)

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
