[TRTLLM-12365][ci] Dedup AutoDeploy unit tests on B200 #13593
QiJune merged 4 commits into NVIDIA:main
Conversation
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
📝 Walkthrough
The changes modify two AutoDeploy YAML test configuration files, narrowing broad directory-level test-suite selections to explicitly listed test modules and specific test cases, removing redundant coverage.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes. Pre-merge checks: ✅ 5 passed.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `tests/integration/test_lists/test-db/l0_dgx_b200.yml`:
- Around line 325-329: The YAML currently lists only test_moe_tp_shard_nvfp4
which drops BF16 coverage; update the test-db entry to include both
test_moe_tp_shard_nvfp4 and test_moe_tp_shard_bf16 (both tests in
test_tp_sharding.py under the fp4_compatible() gate) so BF16 MoE TP sharding
remains on B200, and remove any unrelated QA-list edits in
tests/integration/test_lists/qa/ since this PR only retunes unittest scheduling.
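For concreteness, a minimal sketch of what that suggested entry could look like, assuming the list-style test-db schema and an `unittest/auto_deploy/...` path prefix (both are assumptions, not copied from the diff):

```yaml
# Hypothetical excerpt of l0_dgx_b200.yml; the condition block and path
# prefix are assumed. Only the two test names come from the comment above.
l0_dgx_b200:
- condition:
    wildcards:
      gpu:
      - '*b200*'
  tests:
  # Explicitly kept NVFP4 case:
  - unittest/auto_deploy/multigpu/transformations/library/test_tp_sharding.py::test_moe_tp_shard_nvfp4
  # CodeRabbit's proposed addition (disputed by the human reviewer below):
  - unittest/auto_deploy/multigpu/transformations/library/test_tp_sharding.py::test_moe_tp_shard_bf16
```

As the reviewer notes further down, `test_moe_tp_shard_bf16` carries no `fp4_compatible()` gate, so this proposed addition was declined.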
📒 Files selected for processing (2)
- tests/integration/test_lists/test-db/l0_b200.yml
- tests/integration/test_lists/test-db/l0_dgx_b200.yml
lucaslie left a comment:
Thanks for tackling the B-tier dedup — the savings (~40 min/PR, especially the multigpu line) are very welcome.
I cross-checked the explicit list against every fp4_compatible() / @skip_pre_blackwell / is_sm_100f() gated test in tests/unittest/auto_deploy/, and every strictly-gated NVFP4 / FP4 / TRTLLM-Gen test is preserved by the files / parametrize filter you kept. That part of the dedup looks correct to me.
A few specific places where I think we may lose Blackwell-only signal — left as inline comments. Net summary:
- (blocking-ish) `test_triton_paged_attention.py::TestSDPADispatch::test_large_head_dim_forces_sdpa` is explicitly a regression test for a Blackwell-only Triton `tl.dot` shared-memory bug — Hopper coverage doesn't substitute. (See inline comment on `l0_b200.yml:323`.)
- (non-blocking) Reducing `test_ad_build_small_single.py` to just the FP8 variant gives up the broadest smoke-level signal we have on B200. The same file's source explicitly notes a past `trtllm` attention failure on B200, so smoke runs there have caught Blackwell-only kernel issues before. (See inline comment on `l0_b200.yml:324`.)
- (low priority / FYI) Dropping all of `multigpu/transformations/library/` except `test_moe_tp_shard_nvfp4` also drops `test_ad_allreduce_strategies.py` and `test_allreduce_residual_rmsnorm_fusion.py`. The kernels themselves are exercised by `multigpu/custom_ops` (kept), but the strategy-selection / NVLS path is not. (See inline comment on `l0_dgx_b200.yml:328`.)
Also flagging that the CodeRabbit suggestion to add test_moe_tp_shard_bf16 is incorrect — that test has no fp4_compatible() gate, it's HW-agnostic, and Hopper covers it. You're right to leave it off.
- Add `TestSDPADispatch` on B-tier (contains `test_large_head_dim_forces_sdpa`, a regression test for Blackwell `tl.dot` misaligned shared-memory accesses that does not reproduce on Hopper).
- Broaden the smoke filter to also include Nemotron-Nano-3-30B-A3.5B-dev and Llama-4-Scout so the (attention x compile-backend) combinatorial smoke coverage on B-tier is preserved.
- Add `test_allreduce_residual_rmsnorm_fusion -k "strategy_auto"` on `l0_dgx_b200` to cover AUTO -> NVLS dispatch on B-tier, which is structurally different from the H100 path.

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
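Read as test-db lines, those follow-ups would land roughly as below. This is a sketch only, with assumed path prefixes and an invented `-k` spelling for the smoke filter, not the merged diff:

```yaml
# l0_b200.yml (sketch): pin the Blackwell-only SDPA dispatch regression class.
- unittest/auto_deploy/singlegpu/custom_ops/test_triton_paged_attention.py::TestSDPADispatch
# l0_b200.yml (sketch): broaden the smoke filter beyond the FP8 variant; the
# exact -k expression here is illustrative, not quoted from the commit.
- unittest/auto_deploy/singlegpu/smoke/test_ad_build_small_single.py -k "fp8 or Nemotron or Scout"
# l0_dgx_b200.yml (sketch): cover AUTO -> NVLS allreduce dispatch on B-tier.
- unittest/auto_deploy/multigpu/transformations/library/test_allreduce_residual_rmsnorm_fusion.py -k "strategy_auto"
```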
/bot skip --comment "trivial changes"
PR_Github #46461 [ skip ] triggered by Bot.
Hi @suyoggupta, the short answer is yes! A couple of guiding principles we're following:
For visibility, we are also actively working on deduplicating test cases (#13584, #13366), and will do more of this in the future.
PR_Github #46461 [ skip ] completed with state
AutoDeploy unit tests under `tests/unittest/auto_deploy/{singlegpu,multigpu}/` were scheduled as full directory entries on B-tier pre-merge stages. Most tests are either HW-agnostic (graph transforms, model patches, RoPE/RMSNorm fusion, KV cache) or pure FP8 (Hopper covers natively). Only the `fp4_compatible()` / `@skip_pre_blackwell` / TRTLLM-Gen tests need B-tier.

Replace the directory entries with explicit file lists / parametrize filters keeping only NVFP4 / FP4 / TRTLLM-Gen tests, as sketched below. Hopper (`l0_h100.yml`, `l0_dgx_h100.yml`) and A30 (`l0_a30.yml`) still run the full directories, so no coverage gap.
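The mechanical shape of the change is sketched below; the directory lines follow the description above, while the replacement file names are illustrative stand-ins for the gated NVFP4 / FP4 / TRTLLM-Gen set actually kept in the diff:

```yaml
# Before: whole directories scheduled on B-tier (per the description above).
- unittest/auto_deploy/singlegpu
- unittest/auto_deploy/multigpu
# After: explicit files and parametrize filters. The first file name below is
# hypothetical; the other two entries are quoted elsewhere in this PR.
- unittest/auto_deploy/singlegpu/custom_ops/test_nvfp4_linear.py
- unittest/auto_deploy/singlegpu/smoke/test_ad_build_small_single.py -k "fp8"
- unittest/auto_deploy/multigpu/transformations/library/test_tp_sharding.py::test_moe_tp_shard_nvfp4
```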
Per-PR B-tier savings (past-7d OS averages) were measured for these suites:
- `singlegpu/smoke`
- `singlegpu/models`
- `singlegpu/transformations`
- `singlegpu/custom_ops`
- `multigpu/transformations`

The `multigpu/transformations` line is particularly valuable given recent GB200 queue contention.

Summary by CodeRabbit
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment `/bot help`.