
[None][test] rename test case and add fallback for multinode cases#13537

Merged
ruodil merged 3 commits into NVIDIA:main from ruodil:user/ruodil/perf
May 5, 2026

Conversation

@ruodil
Collaborator

@ruodil ruodil commented Apr 28, 2026

Summary by CodeRabbit

  • Tests
    • Enhanced GPU availability validation for multi-node performance test configurations
    • Updated MiniMax M2.5 model identifier to FP8 precision variant
    • Modified performance benchmarks to use FP8 quantization instead of BF16

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@ruodil ruodil requested a review from JennyLiu-nv April 28, 2026 01:54
@ruodil ruodil requested review from a team as code owners April 28, 2026 01:54
@coderabbitai
Contributor

coderabbitai Bot commented Apr 28, 2026

📝 Walkthrough

Walkthrough

This pull request updates the performance test infrastructure to use the MiniMax M2.5 FP8 model variant instead of BF16, and modifies the GPU availability validation in the test configuration to support multi-node SLURM cluster setups by computing the total GPU count across all nodes.

Changes

Performance Test Config & GPU Validation — tests/integration/defs/perf/test_perf.py
Updates the model mapping from minimax_m2.5 to the minimax_m2.5_fp8 identifier; modifies PerfTestConfig.validate() to compute cluster-wide GPU totals from the SLURM environment variables SLURM_NNODES and SLURM_GPUS_PER_NODE instead of relying solely on the local device count, and reports detailed local/cluster GPU accounting in skip messages.

Performance Test List — tests/integration/test_lists/qa/llm_perf_core.yml
Replaces all occurrences of the MiniMax M2.5 BF16 test variant (minimax_m2.5-bench-pytorch-bfloat16-bench) with the FP8 variant (minimax_m2.5_fp8-bench-pytorch-float8-bench) across the 4-GPU and 8-GPU test configurations while preserving input/output shape and latency/throughput annotation coverage.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Description check — ⚠️ Warning: the PR description is entirely empty; it contains only the template with commented-out sections and an uncompleted checklist, with no implementation details, rationale, or test-coverage explanation. Resolution: fill in the Description and Test Coverage sections with clear explanations of what changed (the MiniMax BF16→FP8 update and the multinode GPU validation logic) and which tests validate the changes, then check off the PR checklist items as appropriate.
  • Title check — ❓ Inconclusive: the title is only partially related to the changeset. "rename test case" matches the model identifier change (minimax_m2.5 → minimax_m2.5_fp8), but it does not convey the main changes: moving the model from BF16 to FP8 and adding cluster-aware GPU validation for multinode runs. Resolution: consider a title such as '[None][test] Update minimax model to FP8 and add multinode GPU validation'.
✅ Passed checks (3 passed)
  • Docstring Coverage — ✅ Passed: docstring coverage is 100.00%, above the required 80.00% threshold.
  • Linked Issues check — ✅ Passed: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: skipped because no linked issues were found for this pull request.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/perf/test_perf.py (1)

1-1: ⚠️ Potential issue | 🟠 Major

Update the SPDX copyright year for this modified Python file.

Line 1 still ends at 2025, but this file is modified in 2026 and should be updated accordingly.

Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines: "Include NVIDIA copyright header on all new files; update year on modified files".

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/integration/test_lists/qa/llm_perf_core.yml`:
- Around lines 227-234: the QA perf list added multiple minimax_m2.5_fp8 test
entries (e.g.
perf/test_perf.py::test_perf[minimax_m2.5_fp8-bench-pytorch-float8-...]), but
the authoritative pre-merge CI list (l0_perf) was not updated. Add the same
minimax_m2.5_fp8 entries present in qa/ (all variants shown in the diff:
different input_output_len, maxbs, reqs, con, tp/gpus) to the l0_perf.yml
test-db list so the tests run in pre-merge CI, copying the exact test
identifiers so CI picks them up.
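As a hypothetical sketch of the rename this comment refers to (the real entries carry additional input/output-length and GPU-count suffixes, elided here as in the comment above):

```yaml
# Before (BF16 variant, removed):
# - perf/test_perf.py::test_perf[minimax_m2.5-bench-pytorch-bfloat16-bench-...]
# After (FP8 variant, added to both qa/llm_perf_core.yml and l0_perf.yml):
- perf/test_perf.py::test_perf[minimax_m2.5_fp8-bench-pytorch-float8-bench-...]
```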

---

Outside diff comments:
In `@tests/integration/defs/perf/test_perf.py`:
- Line 1: Update the SPDX copyright year in the file header: change the
top-of-file SPDX comment that currently ends with "2025" to "2026" so the
modified Python file reflects the correct copyright year; locate the SPDX line
at the very top of the file (the SPDX-FileCopyrightText comment) and edit the
year range accordingly.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: a8054059-6ffe-4ae4-9d2b-ee266925a70c

📥 Commits

Reviewing files that changed from the base of the PR and between be1f6f5 and 6351d6c.

📒 Files selected for processing (2)
  • tests/integration/defs/perf/test_perf.py
  • tests/integration/test_lists/qa/llm_perf_core.yml

Comment thread tests/integration/test_lists/qa/llm_perf_core.yml
@ruodil
Collaborator Author

ruodil commented Apr 28, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #45892 [ run ] triggered by Bot. Commit: 6351d6c Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #45892 [ run ] completed with state FAILURE. Commit: 6351d6c
/LLM/main/L0_MergeRequest_PR pipeline #36060 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@ruodil ruodil enabled auto-merge (squash) April 29, 2026 03:00
@ruodil ruodil force-pushed the user/ruodil/perf branch from c391358 to a594f58 on April 29, 2026 06:32
@ruodil
Collaborator Author

ruodil commented Apr 30, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #46279 [ run ] triggered by Bot. Commit: 4504f0b Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46279 [ run ] completed with state SUCCESS. Commit: 4504f0b
/LLM/main/L0_MergeRequest_PR pipeline #36384 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

CI Report

Link to invocation

ruodil added 2 commits May 5, 2026 09:53
Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
GLM-5 tokenizer_config.json sets tokenizer_class=TokenizersBackend
which AutoTokenizer cannot import without trust_remote_code=True,
causing trtllm-bench prepare-dataset to fail with a pydantic
ValidationError for all glm_5_fp8 perf cases.

Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
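The workaround described in the commit message above can be illustrated with a small helper. This is a hypothetical sketch: the function name and the model set are invented for illustration and are not the actual test_perf.py implementation.

```python
def tokenizer_kwargs(model_label: str) -> dict:
    """Extra kwargs to pass to AutoTokenizer.from_pretrained().

    GLM-5's tokenizer_config.json declares tokenizer_class=TokenizersBackend,
    a custom class that AutoTokenizer can only import when called with
    trust_remote_code=True; without it, trtllm-bench prepare-dataset fails
    with a pydantic ValidationError.
    """
    # Hypothetical registry of models whose tokenizers need remote code.
    NEEDS_REMOTE_CODE = {"glm_5_fp8"}
    if model_label in NEEDS_REMOTE_CODE:
        return {"trust_remote_code": True}
    return {}
```

A caller would then do something like `AutoTokenizer.from_pretrained(model_dir, **tokenizer_kwargs(label))`, so only the models that require it opt in to executing repository-provided tokenizer code.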
@xinhe-nv xinhe-nv force-pushed the user/ruodil/perf branch from 623e349 to 86f2b38 on May 5, 2026 01:53
@ruodil
Collaborator Author

ruodil commented May 5, 2026

/bot --reuse-pipeline

@github-actions

github-actions Bot commented May 5, 2026

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Supports wildcard * for pattern matching (e.g., "*PerfSanity*" matches all stages containing PerfSanity). Examples: "A10-PyTorch-1, xxx", "PerfSanity". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Supports wildcard * for pattern matching. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx", --extra-stage "Post-Merge".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@ruodil
Collaborator Author

ruodil commented May 5, 2026

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #46729 [ reuse-pipeline ] triggered by Bot. Commit: 488f46f Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46729 [ reuse-pipeline ] completed with state SUCCESS. Commit: 488f46f
Reusing PR_Github #46279 for commit 488f46f

Link to invocation

@ruodil ruodil merged commit bc28803 into NVIDIA:main May 5, 2026
6 checks passed
5 participants