
[#9460][feat] AutoDeploy: Fully-fledged GPT-OSS support #10968

Draft

Fridah-nv wants to merge 3 commits into NVIDIA:main from nv-auto-deploy:user/fridah/gptoss-moe-update

Conversation

@Fridah-nv (Collaborator) commented on Jan 24, 2026

closes #9460

Summary by CodeRabbit

  • New Features

    • Added per-expert bias support for MoE operations
    • Extended activation function support for gated and ungated MLP paths
    • Added GPT-OSS model patch for improved torch.export compatibility
  • Configuration Changes

    • Disabled MOE fusion by default in post-load configuration
  • Tests

    • Added test coverage for GPT-OSS style MoE operations


Description

  • Map the BF16 model's MoE pattern to torch_moe and deprecate torch_moe_dense_mlp.
    torch_moe is updated to:
  1. Support per-expert bias
  2. Add the SwigluBias activation function (a sketch follows below)
  • Map the pattern to an optimized kernel.
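
A minimal sketch of what the SwigluBias activation computes, following the GPT-OSS formulation in HF transformers (the clamp limit of 7.0 and the sigmoid scale alpha of 1.702 are the GPT-OSS defaults; the exact signature inside torch_moe may differ):

```python
import torch


def swiglu_bias(gate: torch.Tensor, up: torch.Tensor,
                alpha: float = 1.702, limit: float = 7.0) -> torch.Tensor:
    # Clamp the pre-activations as GPT-OSS does, apply a scaled SiLU to the
    # gate, and shift the up projection by +1 (presumably the "Bias" in the
    # name) before gating.
    gate = gate.clamp(max=limit)
    up = up.clamp(min=-limit, max=limit)
    glu = gate * torch.sigmoid(gate * alpha)
    return (up + 1.0) * glu
```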

Tested with

model: unsloth/gpt-oss-20b-BF16
args:
  mode: graph
  world_size: 1
  runtime: trtllm
  compile_backend: torch-simple
  attn_backend: torch
  model_factory: AutoModelForCausalLM
  skip_loading_weights: false
  disable_overlap_scheduler: true
  kv_cache_config:
    enable_block_reuse: false
  model_kwargs:
    torch_dtype: bfloat16
benchmark:
  enabled: false
prompt:
  sp_kwargs:
    top_k: 0
    temperature: 0
dry_run: false

Outputs before/after are the same:

[01/30/2026-23:14:31] [TRT-LLM AUTO-DEPLOY] [I] [PROMPT 0] How big is the universe? : 1.3 trillion light years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8 billion years? 13.8? 13.8? 13.8? 
[01/30/2026-23:14:31] [TRT-LLM AUTO-DEPLOY] [I] [PROMPT 1] In simple words and a single sentence, explain the concept of gravity: : 1) The 2nd law of Newton's law?

The second law of Newton's law states that the force acting on an object is equal to the mass of the object multiplied by its acceleration.

The second law of

The second law of Newton's law states that the force acting on an object is equal to the mass of the object multiplied by its acceleration.

The second law of Newton's law states that the force acting on an object is equal to the mass of the object multiplied

This will break GPT-OSS mxfp4 model support in AutoDeploy; mxfp4 support will be added in a follow-up PR.

Test Coverage

test_gptoss_style_moe in tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_ad_moe_op.py validates the updated torch MoE op against a GPT-OSS style reference across FP16 and BF16 (see the walkthrough below).

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Fridah-nv self-assigned this on Jan 24, 2026
@Fridah-nv (Collaborator, Author) commented

@tcherckez-nvidia please let me know if there's an update for #9810 and I can also test it here.

Fridah-nv marked this pull request as ready for review on January 26, 2026 21:09
Fridah-nv requested a review from a team as a code owner on January 26, 2026 21:09
Fridah-nv requested a review from QiJune on January 26, 2026 21:09
Fridah-nv marked this pull request as draft on January 26, 2026 21:10
coderabbitai bot (Contributor) commented on Jan 26, 2026

📝 Walkthrough

This PR disables MOE post-load fusion in configuration and refactors torch MoE implementation to support gated MLPs with per-expert biases and expanded activation functions including SwigluBias. Adds GPT-OSS MoE patching capability for torch.export compatibility and corresponding validation tests.
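
As a rough illustration of what the patching step involves: the HF GPT-OSS checkpoint stores each layer's experts as fused tensors with gate and up channels interleaved along the last dimension, so they must be disentangled into the stacked per-expert layout that torch_moe consumes. A hypothetical sketch (the interleaved layout matches the HF GPT-OSS experts module; the [E, out, in] transpose convention for torch_moe is an assumption):

```python
import torch


def split_gptoss_gate_up(gate_up_proj: torch.Tensor,
                         gate_up_proj_bias: torch.Tensor):
    # gate_up_proj:      [num_experts, hidden, 2 * intermediate]
    # gate_up_proj_bias: [num_experts, 2 * intermediate]
    # Gate and up columns alternate along the last dimension.
    w_gate = gate_up_proj[..., ::2]       # [E, hidden, intermediate]
    w_up = gate_up_proj[..., 1::2]        # [E, hidden, intermediate]
    b_gate = gate_up_proj_bias[..., ::2]  # [E, intermediate]
    b_up = gate_up_proj_bias[..., 1::2]   # [E, intermediate]
    # Assumed torch_moe layout: stacked [E, out_features, in_features]
    # weight matrices, so swap the last two dims of each weight.
    return w_gate.transpose(1, 2), w_up.transpose(1, 2), b_gate, b_up
```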

Changes

Configuration — tensorrt_llm/_torch/auto_deploy/config/default.yaml
Disabled MOE post-load fusion by toggling fuse_moe.enabled from true to false.

Core MoE Implementation — tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py
Refactored activation resolution from _resolve_t to _resolve_act_fn to support gated and ungated MLP paths. Added per-expert bias-stacking parameters (w1_bias_stacked, w2_bias_stacked, w3_bias_stacked) to torch_moe, torch_moe_fake, torch_quant_fp8_moe, and torch_quant_nvfp4_moe. Extended activation support to include SwigluBias, Silu/Swiglu with gating, and Relu2 variants. Updated MLP construction to handle both gated (W1/W2/W3) and ungated (W1/W2) weight configurations with optional biases; a reference sketch follows this list.

GPT-OSS Patching — tensorrt_llm/_torch/auto_deploy/models/patches/gptoss.py
Introduced the new GptOssMoePatch class for torch.export compatibility, implementing _forward_gptoss_mlp to compute a dense MoE-like forward using torch_moe with the SwigluBias activation. The patch assembles per-expert weights and biases from the existing projections and hooks in to replace the GPT-OSS MLP forward method.

Unit Tests — tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_ad_moe_op.py
Added a reference_gptoss_moe function implementing the SwigluBias-based computation with per-expert gates and routing. Introduced test_gptoss_style_moe, which validates the torch MoE implementation against the reference across FP16 and BF16 dtypes with varying expert counts.
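
To make the new signature concrete, here is a rough, unoptimized loop in the spirit of reference_gptoss_moe, showing how the stacked per-expert biases enter a gated MoE (the argument names mirror the stacked parameters above; the op's actual routing and normalization details may differ):

```python
from typing import Callable, Optional

import torch
import torch.nn.functional as F


def moe_with_bias_reference(
    x: torch.Tensor,                    # [tokens, hidden]
    router_logits: torch.Tensor,        # [tokens, num_experts]
    w1: torch.Tensor,                   # [E, inter, hidden] gate projection
    w2: torch.Tensor,                   # [E, hidden, inter] down projection
    w3: torch.Tensor,                   # [E, inter, hidden] up projection
    w1_bias_stacked: Optional[torch.Tensor] = None,  # [E, inter]
    w2_bias_stacked: Optional[torch.Tensor] = None,  # [E, hidden]
    w3_bias_stacked: Optional[torch.Tensor] = None,  # [E, inter]
    top_k: int = 2,
    act_fn: Callable = lambda g, u: F.silu(g) * u,   # gated activation
) -> torch.Tensor:
    probs = torch.softmax(router_logits, dim=-1)
    weights, experts = torch.topk(probs, top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)
    out = torch.zeros_like(x)
    for e in range(w1.shape[0]):
        # Tokens (and their top-k slot) routed to expert e.
        tok, slot = (experts == e).nonzero(as_tuple=True)
        if tok.numel() == 0:
            continue
        h = x[tok]
        gate = F.linear(h, w1[e], None if w1_bias_stacked is None else w1_bias_stacked[e])
        up = F.linear(h, w3[e], None if w3_bias_stacked is None else w3_bias_stacked[e])
        h = F.linear(act_fn(gate, up), w2[e], None if w2_bias_stacked is None else w2_bias_stacked[e])
        out[tok] += weights[tok, slot].unsqueeze(-1) * h
    return out
```

Plugging the swiglu_bias sketch from the description in as act_fn reproduces the GPT-OSS expert computation.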

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)

Docstring Coverage — ⚠️ Warning: Docstring coverage is 41.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

Description check — ❓ Inconclusive: The PR description contains relevant technical changes and testing details but lacks clarity in key sections. Resolution: provide a concise 'Description' section explaining what and why, separate from implementation details; explicitly list 'Test Coverage' with specific test names and their purpose; and complete the PR Checklist with checkmarks for all applicable items.

✅ Passed checks (1 passed)

Title check — ✅ Passed: The title clearly indicates the main feature, adding full GPT-OSS support to AutoDeploy, which aligns with the substantial changes across MoE configuration, the torch_moe implementation, and test coverage.


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py (1)

203-218: Silence unused bias args in torch_moe_fake.

Ruff ARG001 will flag these new parameters as unused; mark them explicitly to keep lint clean.

🧹 Suggested fix
 def torch_moe_fake(
     x: torch.Tensor,
@@
     w1_bias_stacked: Optional[torch.Tensor] = None,
     w2_bias_stacked: Optional[torch.Tensor] = None,
     w3_bias_stacked: Optional[torch.Tensor] = None,
 ) -> torch.Tensor:
+    _ = (w1_bias_stacked, w2_bias_stacked, w3_bias_stacked)
     return torch.empty_like(x)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_ad_moe_op.py (1)

1-3: Add the required NVIDIA SPDX header to this test file.

Tests are still source files and need the SPDX header with the latest modification year.

📄 Proposed header addition
+ # SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ # SPDX-License-Identifier: Apache-2.0
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
 import pytest
 import torch
As per coding guidelines, please include the SPDX header with the latest modification year.
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py`:
- Around line 26-31: In the ActivationType.Relu2 branch inside the act_fn
handling (the block returning the gated/non‑gated lambdas), keep the two-arg
signature for the gated lambda but mark the unused second parameter to satisfy
Ruff ARG005 (e.g., rename up to _up or _). Update the lambda defined as "lambda
gate, up: ..." to "lambda gate, _up: ..." (or similar underscore name) so the
unused parameter is explicit while preserving the required two-argument
signature.
- Around line 1-6: This file is missing the required NVIDIA SPDX
copyright/header; add the standard SPDX header comment block at the very top of
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py (above all
imports such as "from typing import Callable, List, Optional" and "from
tensorrt_llm._torch.utils import ActivationType"), updating the year to the
latest modification year and including the NVIDIA copyright identifier and
license tag as per project guidelines.
🧹 Nitpick comments (2)
tensorrt_llm/_torch/auto_deploy/models/patches/gptoss.py (1)

22-26: Keep module namespaces in imports.

Guidelines require importing modules (not symbols) to preserve namespaces.

♻️ Suggested refactor
-import torch
-
-from tensorrt_llm._torch.utils import ActivationType
-
-from ...export.interface import BaseExportPatch, ExportPatchRegistry
+import torch
+import tensorrt_llm._torch.utils as torch_utils
+from ...export import interface as export_interface
@@
-        act_fn=int(ActivationType.SwigluBias),
+        act_fn=int(torch_utils.ActivationType.SwigluBias),
@@
-@ExportPatchRegistry.register("hf_gptoss_moe")
-class GptOssMoePatch(BaseExportPatch):
+@export_interface.ExportPatchRegistry.register("hf_gptoss_moe")
+class GptOssMoePatch(export_interface.BaseExportPatch):
As per coding guidelines, module namespaces should be preserved on import.
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_ad_moe_op.py (1)

10-10: Use module import for ActivationType.

This keeps the namespace intact per guidelines.

♻️ Suggested refactor
-from tensorrt_llm._torch.utils import ActivationType
+import tensorrt_llm._torch.utils as torch_utils
@@
-            act_fn=int(ActivationType.SwigluBias),
+            act_fn=int(torch_utils.ActivationType.SwigluBias),
As per coding guidelines, module namespaces should be preserved on import.

Comment on lines +1 to 6
from typing import Callable, List, Optional

import torch
import torch.nn.functional as F

from tensorrt_llm._torch.utils import ActivationType

⚠️ Potential issue | 🟠 Major

Add the required NVIDIA SPDX header.

This source file now has new logic but lacks the NVIDIA SPDX copyright header with the latest modification year.

📄 Proposed header addition
+ # SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ # SPDX-License-Identifier: Apache-2.0
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
 from typing import Callable, List, Optional
As per coding guidelines, please add/update the SPDX header with the latest modification year.

Comment on lines 26 to +31
elif act_fn == ActivationType.Relu2:
return (
(lambda gate, up: torch.square(F.relu(gate)))
if is_gated
else (lambda x: torch.square(F.relu(x)))
)

⚠️ Potential issue | 🟡 Minor

Mark the unused up argument in the ReLU2 gated lambda.

Ruff ARG005 flags the unused parameter; keep the two‑arg signature but mark the unused value.

🧹 Suggested fix
-            (lambda gate, up: torch.square(F.relu(gate)))
+            (lambda gate, _up: torch.square(F.relu(gate)))
🧰 Tools
🪛 Ruff (0.14.14)

28-28: Unused lambda argument: up

(ARG005)


@lucaslie (Member) commented

For this PR:

  1. clean up bf16 support
  2. add support for fuse_moe
  3. register the model on the dashboard with the correct parameters: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/auto_deploy/model_registry

Regarding 2: if it is too complicated, we can run it on the dashboard with fuse_moe disabled.

Follow-up PR: repeat for mxfp4.

Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
@tcherckez-nvidia (Collaborator) commented

> @tcherckez-nvidia please let me know if there's an update for #9810 and I can also test it here.

No update. AFAIU this PR fixes the accuracy issue?

@tcherckez-nvidia (Collaborator) commented

If needed, we can add a generic config that disables fused MoE, or a more specific one for GPT-OSS.


Development

Successfully merging this pull request may close issue #9460: [Feature]: AutoDeploy: Fully-fledged GPT-OSS support.