
[None][test] update function multi nodes test #12075

Merged
xinhe-nv merged 15 commits into NVIDIA:main from xinhe-nv:test
Mar 18, 2026

Conversation

@xinhe-nv
Collaborator

@xinhe-nv xinhe-nv commented Mar 10, 2026

Summary by CodeRabbit

  • Chores
    • Refactored test infrastructure to streamline command execution and optimize timeout parameters for evaluation runs.
    • Updated multi-node test configurations and reduced overall test suite scope for improved execution efficiency.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@xinhe-nv xinhe-nv force-pushed the test branch 4 times, most recently from 275d32e to 8969c1f on March 16, 2026 01:47
@xinhe-nv xinhe-nv marked this pull request as ready for review March 16, 2026 05:02
@xinhe-nv xinhe-nv requested review from a team as code owners March 16, 2026 05:02
@xinhe-nv xinhe-nv enabled auto-merge (squash) March 16, 2026 05:02
@xinhe-nv
Collaborator Author

/bot run --skip-test

@coderabbitai
Contributor

coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

Walkthrough

The changes remove the "trtllm-llmapi-launch" wrapper from test command invocations, reduce timeout values from 7200 to 5400 seconds, adjust multi-node test parameterization, and remove multiple test cases and entire test functions from the multinode test matrix to narrow coverage scope.
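For context, the wrapper removal and timeout change described above boil down to a simpler subprocess invocation in the test helper. The sketch below is illustrative only: the function name run_eval is assumed, not the actual helper in test_e2e.py, and only the call pattern (no launcher prefix, 5400-second timeout) reflects the PR.

```python
import subprocess

def run_eval(run_cmd):
    """Run an evaluation command and return its combined stdout/stderr."""
    # Before this PR, the command was prefixed with the launcher wrapper,
    # roughly: run_cmd = ["trtllm-llmapi-launch"] + run_cmd,
    # and the timeout was 7200 seconds. Both were changed.
    return subprocess.check_output(
        run_cmd,
        text=True,
        stderr=subprocess.STDOUT,
        timeout=5400,  # reduced from 7200 seconds
    )
```

With this pattern, a command such as run_eval(["echo", "ok"]) executes directly, without the launcher process wrapping it.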

Changes

  • Test execution command updates (tests/integration/defs/test_e2e.py): Removes the launcher wrapper prefix "trtllm-llmapi-launch" from run command sequences; reduces the subprocess call timeout from 7200 to 5400 seconds; updates multi-node test parameterization (adjusts tp_size, pp_size, ep_size combinations); removes several model variants from the quickstart matrix; and deletes the entire test_ptp_quickstart_advanced_llama_multi_nodes function.
  • Multinode test matrix trimming (tests/integration/test_lists/qa/llm_function_multinode.txt): Removes 10 test case entries, including DeepSeek-V3, chat/eval tests across Llama, Qwen, DeepSeek, and Nemotron models, and OpenAI service discovery tests.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check: ⚠️ Warning. The PR description lacks required content; the author provided only a template with minimal substantive details: no meaningful description section, no test coverage explanation, and no clear explanation of what the changes are or why they were made. Resolution: add a clear description explaining the purpose of the test updates, which test cases were modified and why, the impact of the timeout reductions and model matrix trimming, and explicit test coverage details for validation.
  • Title check: ❓ Inconclusive. The title '[None][test] update function multi nodes test' is vague and generic, and does not clearly convey the specific changes made. Resolution: replace it with a more specific title, such as '[None][test] Remove launcher wrapper and adjust timeouts in multi-node tests'.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/test_e2e.py (1)

3063-3068: ⚠️ Potential issue | 🟡 Minor

Stale comment: trtllm-llmapi-launch wrapper has been removed.

The comment on line 3063-3064 states "run the command with trtllm-llmapi-launch pytest wrapper" but the wrapper has been removed from run_cmd. The comment should be updated or removed to avoid confusion.

📝 Proposed fix
     try:
-        # run the command with trtllm-llmapi-launch pytest wrapper
+        # run the evaluation command
         output = subprocess.check_output(run_cmd,
                                          text=True,
                                          stderr=subprocess.STDOUT,
                                          timeout=5400)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/test_e2e.py` around lines 3063 - 3068, Update the
stale comment near the subprocess invocation: remove or replace the phrase
referencing the removed "trtllm-llmapi-launch pytest wrapper" and ensure the
comment reflects the actual behavior of run_cmd (the command being executed)
used with subprocess.check_output; locate the block around run_cmd and the
try/except that calls subprocess.check_output and change the comment to a
concise, accurate description of what run_cmd does (or delete the comment if
redundant).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5a46c04d-3a34-4bfd-ab36-127e88c57605

📥 Commits

Reviewing files that changed from the base of the PR and between fe9e1a3 and a69fa1e.

📒 Files selected for processing (2)
  • tests/integration/defs/test_e2e.py
  • tests/integration/test_lists/qa/llm_function_multinode.txt
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/qa/llm_function_multinode.txt

@tensorrt-cicd
Collaborator

PR_Github #39033 [ run ] triggered by Bot. Commit: a69fa1e Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39033 [ run ] completed with state FAILURE. Commit: a69fa1e
/LLM/main/L0_MergeRequest_PR pipeline #30309 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@xinhe-nv
Collaborator Author

/bot run --skip-test

1 similar comment
@xinhe-nv
Collaborator Author

/bot run --skip-test

@tensorrt-cicd
Collaborator

PR_Github #39039 [ run ] triggered by Bot. Commit: 4ab7193 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39040 [ run ] triggered by Bot. Commit: 4ab7193 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39039 [ run ] completed with state ABORTED. Commit: 4ab7193

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39040 [ run ] completed with state FAILURE. Commit: 4ab7193
/LLM/main/L0_MergeRequest_PR pipeline #30314 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
@xinhe-nv
Collaborator Author

/bot run --skip-test

@tensorrt-cicd
Collaborator

PR_Github #39055 [ run ] triggered by Bot. Commit: 7efc90d Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39081 [ run ] triggered by Bot. Commit: ae4cd38 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39081 [ run ] completed with state SUCCESS. Commit: ae4cd38
/LLM/main/L0_MergeRequest_PR pipeline #30344 (Partly Tested) completed with status: 'SUCCESS'

CI Report

Link to invocation

@xinhe-nv
Collaborator Author

/bot run --skip-test

@tensorrt-cicd
Collaborator

PR_Github #39154 [ run ] triggered by Bot. Commit: f9c4e9d Link to invocation

@xinhe-nv
Collaborator Author

/bot run --skip-test

@tensorrt-cicd
Collaborator

PR_Github #39162 [ run ] triggered by Bot. Commit: cc0498a Link to invocation

@xinhe-nv
Collaborator Author

/bot run --skip-test

@tensorrt-cicd
Collaborator

PR_Github #39166 [ run ] triggered by Bot. Commit: beb90f3 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39166 [ run ] completed with state SUCCESS. Commit: beb90f3
/LLM/main/L0_MergeRequest_PR pipeline #30423 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@xinhe-nv
Collaborator Author

/bot reuse-pipeline --number 12075

@tensorrt-cicd
Collaborator

PR_Github #39349 [ reuse-pipeline ] triggered by Bot. Commit: 3d83665 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39349 [ reuse-pipeline ] completed with state SUCCESS. Commit: 3d83665
Can't reuse PR_Github #39166 (Partly Tested) with status: FAILED

Link to invocation

@xinhe-nv
Collaborator Author

/bot run --stage-list ""

@tensorrt-cicd
Collaborator

PR_Github #39363 [ run ] triggered by Bot. Commit: d01aed1 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39363 [ run ] completed with state SUCCESS. Commit: d01aed1
/LLM/main/L0_MergeRequest_PR pipeline #30608 (Partly Tested) completed with status: 'SUCCESS'

CI Report

Link to invocation

@xinhe-nv
Collaborator Author

xinhe-nv commented Mar 18, 2026

/bot reuse-pipeline

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@xinhe-nv
Collaborator Author

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #39389 [ reuse-pipeline ] triggered by Bot. Commit: aefebe0 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #39389 [ reuse-pipeline ] completed with state SUCCESS. Commit: aefebe0
Reusing PR_Github #39363 (Partly Tested) for commit aefebe0

Link to invocation

@xinhe-nv xinhe-nv merged commit 0bdb9a8 into NVIDIA:main Mar 18, 2026
5 checks passed
@xinhe-nv xinhe-nv deleted the test branch March 18, 2026 05:59
limin2021 pushed a commit to limin2021/TensorRT-LLM that referenced this pull request Mar 19, 2026
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
