[https://nvbugs/5761391][fix] Use correct model names for config database regression tests #10192
Conversation
Force-pushed from f57d456 to 08c0453 (Compare)
…base regression tests

Signed-off-by: Anish Shanbhag <ashanbhag@nvidia.com>
Force-pushed from 5e0ffed to e297371 (Compare)
📝 Walkthrough

This pull request refactors the test framework to map HuggingFace model identifiers to normalized model names and updates test invocations to the new names.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/test_common/http_utils.py (2)
1-1: Consider Python 3.10+ type hint syntax. The `subprocess` import is correctly added to support the new server process monitoring feature.
7-15: Good fail-fast behavior; consider updating the type hint. The server process monitoring logic correctly checks if the server has exited before attempting endpoint connections, preventing unnecessary timeout waits. The logic is sound: `poll()` returns `None` while the process is running and the exit code once it terminates.

♻️ Optional: Use a PEP 484 compliant type hint. Static analysis suggests using an explicit `None` union instead of implicit `Optional`:

```diff
-def wait_for_endpoint_ready(url: str, timeout: int = 300, server_proc: subprocess.Popen = None):
+def wait_for_endpoint_ready(url: str, timeout: int = 300, server_proc: subprocess.Popen | None = None):
```

This follows PEP 484 guidelines for Python 3.10+.
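For readers outside the diff, here is a minimal sketch of the fail-fast wait loop described above, assuming `requests`-based polling; the body is illustrative, not the actual implementation in tests/test_common/http_utils.py:

```python
from __future__ import annotations

import subprocess
import time

import requests


def wait_for_endpoint_ready(url: str,
                            timeout: int = 300,
                            server_proc: subprocess.Popen | None = None):
    """Polls `url` until it responds, failing fast if the server process dies."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        # poll() returns None while the process is alive and the exit code
        # once it terminates, so a non-None value means the server crashed.
        if server_proc is not None and server_proc.poll() is not None:
            raise RuntimeError(
                f"Server exited early with code {server_proc.returncode}")
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return
        except requests.RequestException:
            pass  # Endpoint not reachable yet; keep polling.
        time.sleep(2)
    raise TimeoutError(f"Endpoint {url} not ready after {timeout}s")
```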
scripts/generate_config_database_tests.py (1)
76-79: Consider the static analysis suggestion (optional). The error handling is clear and appropriate. However, Ruff suggests avoiding long error messages directly in the `raise` statement (TRY003). While including the specific model name is valuable for debugging, you could optionally refactor to use a shorter message or define a custom exception if preferred.

Optional refactor (if following strict linting rules):

```diff
-model_name = MODEL_NAME_MAPPING.get(recipe.model)
-if not model_name:
-    raise ValueError(f"Model not found in MODEL_NAME_MAPPING: {recipe.model}")
+if recipe.model not in MODEL_NAME_MAPPING:
+    raise ValueError(f"Unknown model: {recipe.model}")
+model_name = MODEL_NAME_MAPPING[recipe.model]
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- scripts/generate_config_database_tests.py
- tests/integration/defs/perf/open_search_db_utils.py
- tests/integration/defs/perf/test_perf_sanity.py
- tests/integration/test_lists/qa/llm_config_database.yml
- tests/scripts/perf-sanity/config_database_b200_nvl.yaml
- tests/scripts/perf-sanity/config_database_h200_sxm.yaml
- tests/test_common/http_utils.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., `some_file.py`)
Python classes should use PascalCase (e.g., `class SomeClass`)
Python functions and methods should use snake_case (e.g., `def my_awesome_function():`)
Python local variables should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format `"""<type>: Description"""`
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic
Files:
- scripts/generate_config_database_tests.py
- tests/integration/defs/perf/open_search_db_utils.py
- tests/test_common/http_utils.py
- tests/integration/defs/perf/test_perf_sanity.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification
Files:
- scripts/generate_config_database_tests.py
- tests/integration/defs/perf/open_search_db_utils.py
- tests/test_common/http_utils.py
- tests/integration/defs/perf/test_perf_sanity.py
🧠 Learnings (4)
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
- scripts/generate_config_database_tests.py
- tests/integration/test_lists/qa/llm_config_database.yml
- tests/integration/defs/perf/test_perf_sanity.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
- scripts/generate_config_database_tests.py
- tests/integration/test_lists/qa/llm_config_database.yml
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Applied to files:
- tests/integration/test_lists/qa/llm_config_database.yml
📚 Learning: 2025-08-13T11:07:11.772Z
Learnt from: Funatiq
Repo: NVIDIA/TensorRT-LLM PR: 6754
File: tests/integration/test_lists/test-db/l0_a30.yml:41-47
Timestamp: 2025-08-13T11:07:11.772Z
Learning: In TensorRT-LLM test configuration files like tests/integration/test_lists/test-db/l0_a30.yml, TIMEOUT values are specified in minutes, not seconds.
Applied to files:
- tests/integration/test_lists/qa/llm_config_database.yml
🪛 Ruff (0.14.10)
scripts/generate_config_database_tests.py
78-78: Avoid specifying long messages outside the exception class
(TRY003)
tests/test_common/http_utils.py
7-7: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
13-15: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (8)
tests/scripts/perf-sanity/config_database_h200_sxm.yaml (1)
3-1383: LGTM! Model name updates are consistent. The model name changes throughout this configuration file are consistent and align with the PR objective. The new naming convention uses snake_case with precision suffixes (e.g., `deepseek_r1_0528_fp8`, `gpt_oss_120b_fp4`), which appears to be a standardized approach for the config database tests.

tests/integration/test_lists/qa/llm_config_database.yml (1)

26-191: LGTM! Test target updates are consistent. The test entries have been systematically updated to use the new `perf/test_perf_sanity.py::test_e2e[aggr_upload-...]` format. Since this file is auto-generated (as noted in the header), the changes are consistent and align with the broader test framework refactoring in this PR.

tests/scripts/perf-sanity/config_database_b200_nvl.yaml (1)
3-1806: LGTM! Model name updates are consistent. The model name changes are consistent throughout this configuration file and follow the same standardized naming convention as the H200 variant. The updates correctly map HuggingFace-style identifiers to normalized model names with precision suffixes.
tests/integration/defs/perf/open_search_db_utils.py (1)
80-88: The `s_gpu_type` field is properly populated and integrated. The field is assigned via `get_gpu_type()` in test_perf_sanity.py and included in the data dictionaries passed to the database. It's already used in match_keys logic and aligns with the supported GPU types (H200, B200) for scenario matching.
219-231: LGTM! Correctly excludes internal field from public config. Excluding `match_mode` from `extra_llm_api_config_data` is the right approach since this field controls internal matching behavior and should not be exposed in the LLM API configuration.
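As a hedged illustration of that exclusion (the field and variable names follow the review comment; the dict-comprehension approach is an assumption, not the PR's exact code):

```python
def build_extra_llm_api_config(config_data: dict) -> dict:
    """Returns the public LLM API config without internal control fields.

    `match_mode` only steers baseline matching inside the test framework,
    so it must not leak into the config handed to the LLM API.
    """
    return {k: v for k, v in config_data.items() if k != "match_mode"}
```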
521-525: Excellent improvement for fail-fast behavior. Passing `server_proc` to `wait_for_endpoint_ready` enables monitoring of the server process health during startup. This allows the test to fail immediately if the server crashes rather than waiting for the full timeout period.
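A sketch of what the call site might look like, reusing the `wait_for_endpoint_ready` sketch above; the stand-in server is purely for illustration:

```python
import subprocess
import sys

# Launch a stand-in HTTP server (the real tests launch an LLM server with a
# full config); any process exposing an HTTP endpoint works for this sketch.
server_proc = subprocess.Popen([sys.executable, "-m", "http.server", "8000"])
wait_for_endpoint_ready("http://localhost:8000/",
                        timeout=300,
                        server_proc=server_proc)
```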
1271-1277: LGTM! Correct implementation of scenario vs. config-based matching. The logic correctly differentiates between scenario-based and config-based matching:

- Scenario mode: Uses `SCENARIO_MATCH_FIELDS` for baseline comparison across config variations
- Config mode: Adds `s_gpu_type` and `s_runtime` to match keys for more precise matching

This aligns with the broader change to support GPU-type-aware scenario matching.
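A rough sketch of GPU-type-aware match-key selection consistent with that description; the contents of `SCENARIO_MATCH_FIELDS` and the config-mode base fields are assumptions, not the repository's actual definitions:

```python
from __future__ import annotations

# Hypothetical field sets; the real definitions live in open_search_db_utils.py.
SCENARIO_MATCH_FIELDS = ["s_model_name", "s_scenario"]
CONFIG_MATCH_FIELDS = ["s_model_name", "s_concurrency"]


def get_match_keys(match_mode: str) -> list[str]:
    """Returns the fields used to pair a run with its baseline."""
    if match_mode == "scenario":
        # Scenario mode: compare against the baseline across config variations.
        return SCENARIO_MATCH_FIELDS
    # Config mode: additionally pin GPU type and runtime for a precise match.
    return CONFIG_MATCH_FIELDS + ["s_gpu_type", "s_runtime"]
```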
scripts/generate_config_database_tests.py (1)
43-49: LGTM! Clear mapping with good documentation. The `MODEL_NAME_MAPPING` correctly maps HuggingFace model identifiers to the model path keys defined in `MODEL_PATH_DICT` (lines 51-58 of test_perf_sanity.py). All mapped values are present and valid.
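For context, the shape of that mapping and its fail-fast lookup, as a sketch: the HuggingFace identifiers on the left are illustrative guesses, while the normalized names mirror those quoted from the config YAMLs above.

```python
# Illustrative entries only; the real mapping lives in
# scripts/generate_config_database_tests.py.
MODEL_NAME_MAPPING = {
    "deepseek-ai/DeepSeek-R1-0528": "deepseek_r1_0528_fp8",  # hypothetical HF id
    "openai/gpt-oss-120b": "gpt_oss_120b_fp4",  # hypothetical HF id
}


def normalize_model_name(hf_model: str) -> str:
    """Maps a HuggingFace identifier to its normalized config-database name."""
    if hf_model not in MODEL_NAME_MAPPING:
        raise ValueError(f"Model not found in MODEL_NAME_MAPPING: {hf_model}")
    return MODEL_NAME_MAPPING[hf_model]
```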
/bot run --disable-fail-fast

PR_Github #31288 [ run ] triggered by Bot. Commit:

PR_Github #31288 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #31307 [ run ] triggered by Bot. Commit:

PR_Github #31307 [ run ] completed with state
…base regression tests (NVIDIA#10192) Signed-off-by: Anish Shanbhag <ashanbhag@nvidia.com> Signed-off-by: Daniil Kulko <kulkodaniil@gmail.com>
Summary by CodeRabbit
New Features
Improvements
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill
`kill`

Kill all running builds associated with the pull request.
skip
`skip --comment COMMENT`

Skip testing for the latest commit on the pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.