[#9460][feat] AutoDeploy: Fully-fledged GPT-OSS support #10968
Fridah-nv wants to merge 3 commits into NVIDIA:main from
Conversation
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
@tcherckez-nvidia please let me know if there's an update for #9810 and I can also test it here.
📝 Walkthrough

This PR disables MoE post-load fusion in configuration and refactors the torch MoE implementation to support gated MLPs with per-expert biases and expanded activation functions, including SwigluBias. Adds GPT-OSS MoE patching capability for torch.export compatibility and corresponding validation tests.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py (1)
203-218: Silence unused bias args in torch_moe_fake. Ruff ARG001 will flag these new parameters as unused; mark them explicitly to keep lint clean.
🧹 Suggested fix

```diff
 def torch_moe_fake(
     x: torch.Tensor,
@@
     w1_bias_stacked: Optional[torch.Tensor] = None,
     w2_bias_stacked: Optional[torch.Tensor] = None,
     w3_bias_stacked: Optional[torch.Tensor] = None,
 ) -> torch.Tensor:
+    _ = (w1_bias_stacked, w2_bias_stacked, w3_bias_stacked)
     return torch.empty_like(x)
```
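For readers unfamiliar with fake ops: a fake (meta) implementation only describes output shape and dtype, so unread parameters are expected there. Below is a minimal, self-contained sketch of the pattern; the op name `auto_deploy_demo::moe_identity` and its signature are illustrative, not the PR's actual op.

```python
from typing import Optional

import torch


# Real implementation: does the actual compute when the op is called eagerly.
@torch.library.custom_op("auto_deploy_demo::moe_identity", mutates_args=())
def moe_identity(x: torch.Tensor, w1_bias_stacked: Optional[torch.Tensor] = None) -> torch.Tensor:
    return x.clone()


# Fake implementation: used during tracing/export; it never reads the bias, so
# the unused argument is explicitly discarded to keep linters quiet.
@moe_identity.register_fake
def _(x: torch.Tensor, w1_bias_stacked: Optional[torch.Tensor] = None) -> torch.Tensor:
    _ = w1_bias_stacked
    return torch.empty_like(x)
```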
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_ad_moe_op.py (1)

1-3: Add the required NVIDIA SPDX header to this test file. Tests are still source files and need the SPDX header with the latest modification year.

As per coding guidelines, please include the SPDX header with the latest modification year.

📄 Proposed header addition

```diff
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import pytest
 import torch
```
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py`:
- Around line 26-31: In the ActivationType.Relu2 branch inside the act_fn
handling (the block returning the gated/non‑gated lambdas), keep the two-arg
signature for the gated lambda but mark the unused second parameter to satisfy
Ruff ARG005 (e.g., rename up to _up or _). Update the lambda defined as "lambda
gate, up: ..." to "lambda gate, _up: ..." (or similar underscore name) so the
unused parameter is explicit while preserving the required two-argument
signature.
- Around line 1-6: This file is missing the required NVIDIA SPDX
copyright/header; add the standard SPDX header comment block at the very top of
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py (above all
imports such as "from typing import Callable, List, Optional" and "from
tensorrt_llm._torch.utils import ActivationType"), updating the year to the
latest modification year and including the NVIDIA copyright identifier and
license tag as per project guidelines.
🧹 Nitpick comments (2)
tensorrt_llm/_torch/auto_deploy/models/patches/gptoss.py (1)
22-26: Keep module namespaces in imports. Guidelines require importing modules (not symbols) to preserve namespaces.

As per coding guidelines, module namespaces should be preserved on import.

♻️ Suggested refactor

```diff
-import torch
-
-from tensorrt_llm._torch.utils import ActivationType
-
-from ...export.interface import BaseExportPatch, ExportPatchRegistry
+import torch
+import tensorrt_llm._torch.utils as torch_utils
+from ...export import interface as export_interface
@@
-    act_fn=int(ActivationType.SwigluBias),
+    act_fn=int(torch_utils.ActivationType.SwigluBias),
@@
-@ExportPatchRegistry.register("hf_gptoss_moe")
-class GptOssMoePatch(BaseExportPatch):
+@export_interface.ExportPatchRegistry.register("hf_gptoss_moe")
+class GptOssMoePatch(export_interface.BaseExportPatch):
```

tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_ad_moe_op.py (1)
10-10: Use module import for ActivationType. This keeps the namespace intact per guidelines.

As per coding guidelines, module namespaces should be preserved on import.

♻️ Suggested refactor

```diff
-from tensorrt_llm._torch.utils import ActivationType
+import tensorrt_llm._torch.utils as torch_utils
@@
-    act_fn=int(ActivationType.SwigluBias),
+    act_fn=int(torch_utils.ActivationType.SwigluBias),
```
```python
from typing import Callable, List, Optional

import torch
import torch.nn.functional as F

from tensorrt_llm._torch.utils import ActivationType
```
Add the required NVIDIA SPDX header.
This source file now has new logic but lacks the NVIDIA SPDX copyright header with the latest modification year.
📄 Proposed header addition
```diff
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from typing import Callable, List, Optional
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Callable, List, Optional

import torch
import torch.nn.functional as F

from tensorrt_llm._torch.utils import ActivationType
```
🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py` around
lines 1 - 6, This file is missing the required NVIDIA SPDX copyright/header; add
the standard SPDX header comment block at the very top of
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py (above all
imports such as "from typing import Callable, List, Optional" and "from
tensorrt_llm._torch.utils import ActivationType"), updating the year to the
latest modification year and including the NVIDIA copyright identifier and
license tag as per project guidelines.
```python
elif act_fn == ActivationType.Relu2:
    return (
        (lambda gate, up: torch.square(F.relu(gate)))
        if is_gated
        else (lambda x: torch.square(F.relu(x)))
    )
```
Mark the unused up argument in the ReLU2 gated lambda.
Ruff ARG005 flags the unused parameter; keep the two‑arg signature but mark the unused value.
🧹 Suggested fix

```diff
-        (lambda gate, up: torch.square(F.relu(gate)))
+        (lambda gate, _up: torch.square(F.relu(gate)))
```

🧰 Tools
🪛 Ruff (0.14.14)
28-28: Unused lambda argument: up
(ARG005)
🤖 Prompt for AI Agents
In `@tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/torch_moe.py` around
lines 26 - 31, In the ActivationType.Relu2 branch inside the act_fn handling
(the block returning the gated/non‑gated lambdas), keep the two-arg signature
for the gated lambda but mark the unused second parameter to satisfy Ruff ARG005
(e.g., rename up to _up or _). Update the lambda defined as "lambda gate, up:
..." to "lambda gate, _up: ..." (or similar underscore name) so the unused
parameter is explicit while preserving the required two-argument signature.
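To make the gated vs. non-gated split concrete, here is a small standalone sketch of how such an activation selector is typically consumed; the helper name make_relu2 and the tensor shapes are illustrative only, not code from this PR.

```python
import torch
import torch.nn.functional as F


def make_relu2(is_gated: bool):
    # The gated form takes both branches but only squares the ReLU of the gate;
    # the second argument is named _up to mark it as intentionally unused.
    if is_gated:
        return lambda gate, _up: torch.square(F.relu(gate))
    return lambda x: torch.square(F.relu(x))


act = make_relu2(is_gated=True)
gate, up = torch.randn(4, 8), torch.randn(4, 8)
out = act(gate, up)  # only the gate branch contributes for ReLU^2
assert out.shape == (4, 8)
```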
For this PR:

Re item 2: if it's too complicated, we can run it on the dashboard with fuse_moe disabled.

Follow-up PR:
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
No update. AFAIU this PR fixes the accuracy issue?
If needed we can add a generic config that disables fused MoE, or a more specific one for gpt-oss.
closes #9460
Summary by CodeRabbit
New Features
Configuration Changes
Tests
Description
Update torch_moe with:
Tested with
Outputs before/after are the same:
This will break GPT-OSS mxfp4 model support in AutoDeploy; support will be added in a follow-up PR.
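For orientation, below is a rough standalone sketch of a SwigluBias-style gated activation and how per-expert biases enter a gated expert MLP. The clamp limit and the 1.702 sigmoid scale follow the Hugging Face GPT-OSS reference implementation; they and all names here are assumptions for illustration, not code taken from this PR.

```python
import torch


def swiglu_bias(gate: torch.Tensor, up: torch.Tensor,
                alpha: float = 1.702, limit: float = 7.0) -> torch.Tensor:
    # Assumed GPT-OSS-style gating: clamp both branches, SiLU-like gate,
    # and a "+1" bias on the up branch (which gives the op its name).
    gate = gate.clamp(max=limit)
    up = up.clamp(min=-limit, max=limit)
    glu = gate * torch.sigmoid(gate * alpha)
    return (up + 1) * glu


def expert_mlp(x, w_gate, w_up, w_down, b_gate, b_up, b_down):
    # One expert of a gated MLP with per-expert biases (names are illustrative).
    gate = x @ w_gate.T + b_gate
    up = x @ w_up.T + b_up
    return swiglu_bias(gate, up) @ w_down.T + b_down


hidden, inter = 16, 32
x = torch.randn(4, hidden)
out = expert_mlp(
    x,
    torch.randn(inter, hidden), torch.randn(inter, hidden), torch.randn(hidden, inter),
    torch.randn(inter), torch.randn(inter), torch.randn(hidden),
)
assert out.shape == (4, hidden)
```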
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
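For example, a typical pre-merge invocation combining a few of the flags above might look like the following (the stage name is illustrative):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
```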
kill

kill : Kill all running builds associated with the pull request.
skip
skip --comment COMMENT : Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline : Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.