Conversation

@syuoni (Collaborator) commented Jan 21, 2026

Description

This PR enables guided decoding with reasoning parsers (including the Harmony format), resolving:

We chose xgrammar's structural tag to enable this combination, offloading all grammar-related handling to the grammar engines. This choice also reduces the risk of breaking higher-order feature combinations (e.g., with speculative decoding).
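
As a concrete illustration, here is a minimal sketch of the rewrapping idea in xgrammar's structural-tag style. The begin/end delimiters below are placeholder assumptions, not the actual strings emitted for any particular reasoning parser:

import json

# Build an xgrammar-style structural tag that leaves the reasoning text
# unconstrained and applies the JSON schema only after a trigger string.
def wrap_schema_for_reasoning(schema: dict, begin: str, end: str) -> str:
    structural_tag = {
        "structures": [{"begin": begin, "schema": schema, "end": end}],
        "triggers": [begin],  # constraint activates only once this appears
    }
    return json.dumps(structural_tag)

# Placeholder delimiters; real parsers use their own channel/message markers.
tag = wrap_schema_for_reasoning(
    {"type": "object", "properties": {"answer": {"type": "string"}}},
    begin="<answer>", end="</answer>")

The grammar engine then enforces the schema only inside the tagged region, which is how the constraint stays entirely inside the grammar engine rather than in the serving layer.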

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
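
For example, a run that disables fail-fast and restricts tests to specific GPU types (using only the flags documented above) would look like:

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"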

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

  • New Features

    • Added reasoning parser support to guided decoding in OpenAI-compatible API requests for chat and completion operations.
  • Tests

    • Expanded test coverage for guided decoding with parameterized model variants.

@syuoni syuoni requested a review from a team as a code owner January 21, 2026 12:36
@syuoni syuoni self-assigned this Jan 21, 2026
@syuoni (Collaborator, Author) commented Jan 21, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #32949 [ run ] triggered by Bot. Commit: d442d6f

@coderabbitai bot (Contributor) commented Jan 21, 2026

📝 Walkthrough

The changes integrate reasoning parser support into the OpenAI protocol handling layer, enabling optional rewrapping of guided decoding parameters with reasoning-oriented structural tags and propagating this capability through sampling parameter constructors in protocol serialization paths. Testing infrastructure is updated to parameterize model configurations across multiple test runs.
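
A rough sketch of that propagation, using the names mentioned in this walkthrough (the function body is an illustrative assumption, not the actual implementation):

from typing import Optional

# Hypothetical sketch: the server resolves the reasoning parser from its
# arguments (defaulting to "gpt_oss" in Harmony contexts, per the summary
# below) and threads it into sampling-parameter construction.
def build_sampling_params(request, llm_args, use_harmony: bool):
    reasoning_parser: Optional[str] = getattr(llm_args, "reasoning_parser", None)
    if reasoning_parser is None and use_harmony:
        reasoning_parser = "gpt_oss"
    return request.to_sampling_params(reasoning_parser=reasoning_parser)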

Changes

  • Reasoning parser integration in OpenAI protocol (tensorrt_llm/serve/openai_protocol.py, tensorrt_llm/serve/openai_server.py, tensorrt_llm/serve/responses_utils.py): Added an optional reasoning_parser parameter to the guided decoding parameter generation functions. The core logic in _response_format_to_guided_decoding_params conditionally wraps response formats into structural_tag framing tailored for reasoning parsers. The parameter is threaded through ChatCompletionRequest.to_sampling_params, CompletionRequest.to_sampling_params, ResponsesRequest.to_sampling_params, and request_preprocess in responses_utils. Server paths now propagate reasoning_parser from llm.args, defaulting to "gpt_oss" for Harmony contexts.

  • Test parameterization for model variants (tests/integration/defs/test_e2e.py, tests/integration/test_lists/qa/llm_function_core.txt, tests/integration/test_lists/test-db/l0_*.yml, tests/integration/defs/.test_durations): Introduced a @pytest.mark.parametrize decorator on test_openai_chat_guided_decoding with two model choices (Llama-3.1-8B-Instruct and gpt-oss-120b). Test duration entries and test list files are updated to reflect the parameterized variants.

  • Test fixture updates for model-specific configuration (tests/unittest/llmapi/apps/_test_openai_chat_guided_decoding.py): Added a module-scoped parametrized model_name fixture. Updated temp_extra_llm_api_options_file to accept model_name and conditionally apply speculative_config for gpt-oss-120b. Adjusted the server fixture to dispatch model_path conditionally. Increased max_seq_len and max_num_tokens from 1024 to 4096.
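
A minimal sketch of the conditional configuration the last item describes (the speculative_config payload is a placeholder assumption; only the branching pattern and the 4096 limits come from the summary):

def make_extra_llm_api_options(model_name: str) -> dict:
    # Both variants get the raised sequence limits; only gpt-oss-120b gets a
    # speculative_config (the payload below is a placeholder, not the real one).
    options = {"max_seq_len": 4096, "max_num_tokens": 4096}
    if model_name == "openai/gpt-oss-120b":
        options["speculative_config"] = {"decoding_type": "MTP"}  # placeholder
    return options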

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • LinPoly
🚥 Pre-merge checks: 1 passed, 2 failed

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 14.29%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

  • Description check ❓ Inconclusive: The description explains what is being fixed and why (using xgrammar's structural tag), but omits the required 'Test Coverage' details. Resolution: complete the 'Test Coverage' section by listing the specific tests added to validate the new functionality (e.g., the parametrized guided decoding tests).

✅ Passed checks (1 passed)

  • Title check ✅ Passed: The title accurately summarizes the main change, enabling guided decoding with reasoning parsers, which is confirmed by the file changes across protocol handling, server integration, and test parameterization.


@coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/serve/openai_protocol.py (1)

1-3: Add the NVIDIA copyright header.

This source file lacks the required NVIDIA copyright header with the latest meaningful modification year (2026). Please add the standard project header above the existing comments to meet compliance requirements.

🤖 Fix all issues with AI agents
In `@tensorrt_llm/serve/openai_protocol.py`:
- Line 40: Preserve the module namespace when importing ReasoningParserFactory: replace the direct symbol import with a module import (e.g., import tensorrt_llm.llmapi.reasoning_parser as reasoning_parser) and qualify all usages as reasoning_parser.ReasoningParserFactory(...), including the occurrences around lines 276-277, to follow the namespace-preserving import guideline.

In `@tests/integration/defs/.test_durations`:
- Line 834: Add the missing duration entry for the "openai/gpt-oss-120b" variant of the parametrized test: insert the key "test_e2e.py::test_openai_chat_guided_decoding[openai/gpt-oss-120b]" with a numeric duration (e.g., the existing Llama value 55.12449237401597, or an observed/estimated runtime) so that the durations file covers both "meta-llama/Llama-3.1-8B-Instruct" and "openai/gpt-oss-120b".

In `@tests/integration/defs/test_e2e.py`:
- Around lines 1702-1709: test_openai_chat_guided_decoding passes model names containing "/" and "-" into pytest's -k expression, which breaks filtering, and it has an unused llm_root parameter. Mark the unused parameter (e.g., rename it to _llm_root), map each model_name to a safe identifier before invoking llm_venv.run_cmd, and pass that safe identifier to the "-k" argument so pytest receives a valid expression; update references to model_name in the test body and parameter list to use the mapped key when filtering.

In `@tests/unittest/llmapi/apps/_test_openai_chat_guided_decoding.py`:
- Around lines 27-40: temp_extra_llm_api_options_file writes to the fixed path "extra_llm_api_options.yaml", which can collide across parallel tests. Create a unique temp file instead (e.g., via tempfile.NamedTemporaryFile(delete=False), tempfile.mkstemp, or a unique directory from tempfile.mkdtemp), write the YAML there, and return the unique path; update callers to rely on the returned path and remove the file after the test if needed. Use the temp_extra_llm_api_options_file function and the extra_llm_api_options_dict/speculative_config keys to locate where to implement this.
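
As a concrete sketch of that last fix (the helper name and YAML usage are taken from the comment above; the body is an assumption, not the actual fixture):

import tempfile

import yaml

def temp_extra_llm_api_options_file(extra_llm_api_options_dict: dict) -> str:
    # Write the options to a uniquely named temp file instead of the fixed
    # "extra_llm_api_options.yaml", so parallel tests cannot collide.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml",
                                     delete=False) as f:
        yaml.safe_dump(extra_llm_api_options_dict, f)
        return f.name  # caller should delete the file during teardown
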
🧹 Nitpick comments (1)
tests/unittest/llmapi/apps/_test_openai_chat_guided_decoding.py (1)

19-23: Add explicit ids so nodeids are safe for -k filtering.

Default ids include / and -, which can make -k selection brittle. Explicit ids keep selection stable and machine-safe.

🔧 Suggested tweak
-@pytest.fixture(
-    scope="module",
-    params=["meta-llama/Llama-3.1-8B-Instruct", "openai/gpt-oss-120b"])
+@pytest.fixture(
+    scope="module",
+    params=["meta-llama/Llama-3.1-8B-Instruct", "openai/gpt-oss-120b"],
+    ids=["llama3_1_8b", "gpt_oss_120b"])
 def model_name(request):
     return request.param

@juney-nvidia juney-nvidia changed the base branch from release/1.2 to main January 21, 2026 12:59
@juney-nvidia juney-nvidia requested review from a team as code owners January 21, 2026 12:59
@juney-nvidia juney-nvidia changed the base branch from main to release/1.2 January 21, 2026 13:00
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
@syuoni syuoni force-pushed the guided-with-reasoning branch from d442d6f to 745e717 Compare January 21, 2026 13:21
@syuoni syuoni changed the base branch from release/1.2 to main January 21, 2026 13:22
@syuoni syuoni removed request for a team January 21, 2026 13:25
@syuoni syuoni removed request for a team, nv-guomingz and zeroepoch January 21, 2026 13:25
@syuoni (Collaborator, Author) commented Jan 21, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #32955 [ run ] triggered by Bot. Commit: 745e717

@tensorrt-cicd (Collaborator)

PR_Github #32955 [ run ] completed with state SUCCESS. Commit: 745e717
/LLM/main/L0_MergeRequest_PR pipeline #25485 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@syuoni (Collaborator, Author) commented Jan 22, 2026

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33030 [ run ] triggered by Bot. Commit: 745e717

@tensorrt-cicd (Collaborator)

PR_Github #33030 [ run ] completed with state SUCCESS. Commit: 745e717
/LLM/main/L0_MergeRequest_PR pipeline #25536 completed with status: 'SUCCESS'

@syuoni syuoni changed the title [TRTLLM-10154][fix] Enable guided decoding with reasoning parsers [TRTLLM-10154][feat] Enable guided decoding with reasoning parsers Jan 22, 2026
@LinPoly (Collaborator) left a comment

LGTM

@syuoni syuoni merged commit be4a431 into NVIDIA:main Jan 22, 2026
6 checks passed
greg-kwasniewski1 pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Jan 22, 2026
…VIDIA#10890)

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>