[None][fix] Fixed mamba cache issue for pp>1 #12146
Conversation
* Also unwaived most test cases for super v3. Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
/bot help
GitHub Bot Help
Provide a user friendly way for developers to interact with a Jenkins server. See details below for each supported subcommand.

run
Launch build/test pipelines. All previously running jobs will be killed.

kill
Kill all running builds associated with pull request.

skip
Skip testing for latest commit on pull request.

reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
/bot run --disable-fail-fast
PR_Github #38689 [ run ] triggered by Bot. Commit:
📝 Walkthrough
The changes update parameter naming in the Mamba cache manager.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🧹 Nitpick comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)
5986-6022: Parameterize this PP matrix over `use_cpp_mamba` too.

This file now covers `pp_size > 1` and it separately covers `TRTLLM_USE_CPP_MAMBA`, but not in the same test. Since this PR is fixing a PP-specific Mamba cache path, folding the C++ toggle into `test_nvfp4_parallelism` would keep that branch explicitly covered as well.

♻️ Example adjustment
```diff
+    @pytest.mark.parametrize(
+        "use_cpp_mamba",
+        [False, True],
+        ids=["python_mamba_cache", "cpp_mamba_cache"],
+    )
-    def test_nvfp4_parallelism(self, tp_size, ep_size, pp_size, attention_dp):
+    def test_nvfp4_parallelism(self, tp_size, ep_size, pp_size, attention_dp,
+                               use_cpp_mamba, monkeypatch):
+        monkeypatch.setenv("TRTLLM_USE_CPP_MAMBA",
+                           "1" if use_cpp_mamba else "0")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/defs/accuracy/test_llm_api_pytorch.py` around lines 5986-6022: add a boolean param for the C++ mamba toggle to `test_nvfp4_parallelism` and run the same PP/TP/EP matrix with both values: extend the `pytest.mark.parametrize` signature to include `use_cpp_mamba`, update the ids accordingly, and iterate the existing parameter tuples to include True/False for `use_cpp_mamba`; inside `test_nvfp4_parallelism`, set and restore the `TRTLLM_USE_CPP_MAMBA` environment flag (or call the equivalent config toggle) before creating the LLM context so the `LLM(...)` instantiation runs under both C++-mamba enabled and disabled modes.
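As a self-contained sketch of the parametrize-plus-monkeypatch pattern the prompt describes (the test name, parameter ids, and `TRTLLM_USE_CPP_MAMBA` flag follow the review comment; `build_llm` is a hypothetical stand-in for the real `LLM(...)` construction, not the actual TensorRT-LLM API):

```python
# Sketch of the suggested pattern: parametrize a boolean toggle and use
# monkeypatch to scope the environment flag to each parametrized run.
# `build_llm` is a hypothetical stand-in that only reads the toggle the
# way a real code path would.
import os

import pytest


def build_llm():
    return {"use_cpp_mamba": os.environ.get("TRTLLM_USE_CPP_MAMBA") == "1"}


@pytest.mark.parametrize(
    "use_cpp_mamba",
    [False, True],
    ids=["python_mamba_cache", "cpp_mamba_cache"],
)
def test_nvfp4_parallelism(use_cpp_mamba, monkeypatch):
    # monkeypatch.setenv sets the flag for this test only and restores the
    # original environment afterwards, so the two runs don't leak state.
    monkeypatch.setenv("TRTLLM_USE_CPP_MAMBA", "1" if use_cpp_mamba else "0")
    llm = build_llm()
    assert llm["use_cpp_mamba"] is use_cpp_mamba
```

Because `monkeypatch` undoes the `setenv` at teardown, both the enabled and disabled branches run in isolation without a manual try/finally around the env var.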
ℹ️ Review info
⚙️ Run configuration
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 844ffcc2-a18f-4f1f-8264-58416e2e2f4f
📒 Files selected for processing (3)
- tensorrt_llm/_torch/pyexecutor/mamba_cache_manager.py
- tests/integration/defs/accuracy/test_llm_api_pytorch.py
- tests/integration/test_lists/waives.txt
💤 Files with no reviewable changes (1)
- tests/integration/test_lists/waives.txt
PR_Github #38689 [ run ] completed with state

/bot run --only-multi-gpu-test --disable-fail-fast

PR_Github #38735 [ run ] triggered by Bot. Commit:

PR_Github #38735 [ run ] completed with state

/bot run --only-multi-gpu-test --disable-fail-fast --post-merge

PR_Github #38790 [ run ] triggered by Bot. Commit:

PR_Github #38790 [ run ] completed with state

/bot run --stage-list "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-PyTorch-Ray-1, DGX_H100-4_GPUs-AutoDeploy-1, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU2-Post-Merge-2"

/bot run --stage-list "DGX_B200-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-PyTorch-Ray-1, DGX_H100-4_GPUs-AutoDeploy-1, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU2-Post-Merge-2" --disable-fail-fast

PR_Github #38851 [ run ] triggered by Bot. Commit:

PR_Github #38852 [ run ] triggered by Bot. Commit:

PR_Github #38852 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #38866 [ run ] triggered by Bot. Commit:

PR_Github #38866 [ run ] completed with state
Summary by CodeRabbit
Tests
Chores
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- [ ] PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- [ ] PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- [ ] Test cases are provided for new code paths (see test instructions).
- [ ] Any new dependencies have been scanned for license and vulnerabilities.
- [ ] CODEOWNERS updated if ownership changes.
- [ ] Documentation updated as needed.
- [ ] Update tava architecture diagram if there is a significant design change in PR.
- [ ] The reviewers assigned automatically/manually are appropriate for the PR.
- [ ] Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.