[https://nvbugs/5685010][fix] Delete test_eagle3_output_repetition_4gpus flaky assertions. #11725
zheyuf merged 3 commits into NVIDIA:main
Conversation
/bot run --disable-fail-fast
📝 Walkthrough
Test function renamed; flaky assertions deleted.
Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
Comment
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/integration/defs/test_e2e.py (2)
Lines 1-2: ⚠️ Potential issue | 🟠 Major — Update the copyright year on this modified file.
This file was modified in this PR, but line 1 still ends at 2024. Please update it to include 2026.
🛠️ Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
As per coding guidelines:
**/*.{py,cpp,h,hpp,cc,cxx,cu,cuh,h,c}: "Include NVIDIA copyright header on ALL new files; update year on modified files."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/defs/test_e2e.py` around lines 1 - 2, Update the SPDX copyright header in this modified test file by changing the year range on the first line from "2022-2024" to "2022-2026" so the header reflects the file modification; edit the top-of-file comment in tests/integration/defs/test_e2e.py accordingly.
Lines 3481-3483: ⚠️ Potential issue | 🟡 Minor — Use module imports with namespace access in this test.
Lines 3481-3483 import classes directly, which violates the repo's Python import rule.
♻️ Proposed guideline-compliant refactor
- from tensorrt_llm import LLM, SamplingParams
- from tensorrt_llm.llmapi import (CudaGraphConfig, Eagle3DecodingConfig,
-                                  KvCacheConfig)
+ import tensorrt_llm
+ from tensorrt_llm import llmapi
@@
-        "kv_cache_config":
-        KvCacheConfig(
+        "kv_cache_config":
+        llmapi.KvCacheConfig(
             free_gpu_memory_fraction=0.2,
             enable_block_reuse=False,
         ),
         "cuda_graph_config":
-        CudaGraphConfig(),
+        llmapi.CudaGraphConfig(),
@@
-    sampling_params = SamplingParams(max_tokens=1024, temperature=0)
+    sampling_params = tensorrt_llm.SamplingParams(max_tokens=1024, temperature=0)
@@
-    spec_config = Eagle3DecodingConfig(
+    spec_config = llmapi.Eagle3DecodingConfig(
         max_draft_len=3,
         speculative_model=eagle_model_dir,
         eagle3_one_model=True,
     )
-    with LLM(**llm_common_config, speculative_config=spec_config) as llm_spec:
+    with tensorrt_llm.LLM(**llm_common_config, speculative_config=spec_config) as llm_spec:
@@
-    with LLM(**llm_common_config) as llm_ref:
+    with tensorrt_llm.LLM(**llm_common_config) as llm_ref:
As per coding guidelines:
**/*.py: "When importing in Python, always maintain the namespace. Import the module, not individual classes or functions."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/defs/test_e2e.py` around lines 3481 - 3483, Change the direct class imports to module-level imports and update usages to access via the module namespace: replace "from tensorrt_llm import LLM, SamplingParams" and "from tensorrt_llm.llmapi import (CudaGraphConfig, Eagle3DecodingConfig, KvCacheConfig)" with imports of their parent modules (e.g., import tensorrt_llm and import tensorrt_llm.llmapi) and update all references to use tensorrt_llm.LLM, tensorrt_llm.SamplingParams, tensorrt_llm.llmapi.CudaGraphConfig, tensorrt_llm.llmapi.Eagle3DecodingConfig and tensorrt_llm.llmapi.KvCacheConfig so the test follows the repo rule of importing modules rather than individual symbols.
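The "import the module, keep the namespace" rule can be illustrated with a tiny runnable sketch. Since tensorrt_llm is not assumed to be installed here, a stdlib module stands in for it; the principle is identical:

```python
# Sketch of the namespace-import rule, using json as a stand-in for
# tensorrt_llm (which is not assumed to be available here).

# Discouraged by the guideline (pulls names out of their namespace):
#   from json import dumps, loads

# Preferred: import the module, then access members through it, so every
# call site shows where the symbol comes from.
import json

payload = json.dumps({"max_tokens": 1024, "temperature": 0})
decoded = json.loads(payload)
print(decoded["max_tokens"])
```

The same shape applies in the test: `import tensorrt_llm` plus `from tensorrt_llm import llmapi`, then `tensorrt_llm.LLM(...)` and `llmapi.KvCacheConfig(...)` at the call sites.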
🧹 Nitpick comments (1)
tests/integration/defs/test_e2e.py (1)
Lines 3525-3538: Add non-empty output assertions before the repetition checks. If generation returns an empty string, the regex assertions on lines 3535-3538 pass vacuously and can hide a real regression in generation behavior.
✅ Suggested hardening
 with LLM(**llm_common_config) as llm_ref:
     results_ref = llm_ref.generate([prompt], sampling_params)
     output_ref = results_ref[0].outputs[0].text
+
+    assert output_spec.strip(), "Eagle3 output is empty"
+    assert output_ref.strip(), "Baseline output is empty"
+
 repetitive_pattern = re.compile(
     r'(.)\1{10,}')  # Check for 10+ repeated chars
Verify each finding against the current code and only fix it if needed. In `@tests/integration/defs/test_e2e.py` around lines 3525 - 3538, Before running the repetitive-pattern assertions, ensure the generated outputs are non-empty by adding explicit assertions that output_spec and output_ref are not empty (e.g., assert output_spec and output_ref or assert output_spec.strip() and output_ref.strip()) so the regex checks on repetitive_pattern (and variables results_spec, output_spec, results_ref, output_ref) cannot pass vacuously; place these non-empty checks immediately after extracting output_spec and output_ref and before the repetitive_pattern.search calls.
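A self-contained sketch of the suggested hardening; the wrapper function and sample strings below are illustrative, not taken from the test file, but the regex is the one the test uses:

```python
import re


def check_outputs(output_spec: str, output_ref: str) -> None:
    """Illustrative wrapper around the hardened checks suggested above."""
    # Guard against vacuous passes: an empty string trivially contains no
    # repetitive pattern, which would mask a broken generation path.
    assert output_spec.strip(), "Eagle3 output is empty"
    assert output_ref.strip(), "Baseline output is empty"

    # Same pattern as the test: one character followed by 10+ repeats,
    # i.e. 11+ identical characters in a row.
    repetitive_pattern = re.compile(r'(.)\1{10,}')
    assert not repetitive_pattern.search(output_spec), "Eagle3 output degenerated"
    assert not repetitive_pattern.search(output_ref), "Baseline output degenerated"


# Healthy outputs pass; an empty or degenerate output now fails loudly.
check_outputs("The answer is 42.", "The answer is forty-two.")
```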
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
tests/integration/defs/test_e2e.py
tests/integration/test_lists/waives.txt
💤 Files with no reviewable changes (1)
- tests/integration/test_lists/waives.txt
PR_Github #36834 [ run ] triggered by Bot. Commit:
PR_Github #36834 [ run ] completed with state
Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
/bot run --disable-fail-fast
PR_Github #36998 [ run ] triggered by Bot. Commit:
PR_Github #36998 [ run ] completed with state
Signed-off-by: Zheyu Fu <zheyuf@nvidia.com>
/bot reuse-pipeline
PR_Github #37110 [ reuse-pipeline ] triggered by Bot. Commit:
PR_Github #37110 [ reuse-pipeline ] completed with state
…pus flaky assertions. (NVIDIA#11725) Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
@coderabbitai summary
Description
Delete the flaky assertions in test_eagle3_output_repetition_4gpus, per the discussion in https://nvbugspro.nvidia.com/bug/5685010.
In short, the current assertions check accuracy by comparing generation length between the Eagle and non-Eagle runs, which is unreliable. We already have more robust accuracy coverage via GSM8K tests for Qwen3-235B-A22B_nvfp4_hf + Eagle on 4 GPUs here: https://github.com/NVIDIA/TensorRT-LLM/blob/main/tests/integration/defs/accuracy/test_llm_api_pytorch.py#L4230.
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.
--disable-fail-fast (OPTIONAL): Disable fail-fast on build/tests/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build, package, and sanity-check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/developer-guide/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
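Putting the subcommands together, typical PR comments look like the following (flag values are illustrative examples, not taken from this PR):

```
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --stage-list "A10-PyTorch-1"
/bot skip --comment "Docs-only change"
/bot reuse-pipeline
```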