[None][fix] Cap TLLM_BENCHMARK_REQ_QUEUES_SIZE to avoid fill-loop hang #13065
Conversation
📝 Walkthrough
Modifies the computation of the TLLM_BENCHMARK_REQ_QUEUES_SIZE queue size in examples/disaggregated/slurm/benchmark/submit.py, capping it at the effective maximum capacity.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/disaggregated/slurm/benchmark/submit.py`:
- Around lines 235-237: The formatting of the block computing max_capacity and queue_size (variables max_capacity, max_batch_size, tp_size, enable_attention_dp, queue_size, concurrency) violates yapf/PEP8. Run the project's pre-commit hooks or the yapf/ruff formatter to reflow and wrap these lines, then commit the result so CI stops reporting modifications. Ensure the logic remains the same (max_capacity = max_batch_size * tp_size if enable_attention_dp else max_batch_size; queue_size = min(max_capacity, int(concurrency)); if queue_size < int(concurrency):) while fixing only formatting.
- Around lines 232-237: max_batch_size and related numeric variables are mixed with string values (concurrency), causing string repetition and a TypeError. Update the queue-size computation by casting concurrency, max_batch_size, and tensor_parallel_size (tp_size) to int at assignment (e.g., concurrency = int(concurrency) and max_batch_size = int(gen_config.get('max_batch_size', concurrency))), then compute max_capacity = max_batch_size * tp_size if enable_attention_dp else max_batch_size and queue_size = min(max_capacity, concurrency). Also split or shorten any long expressions so the modified lines (especially the max_capacity/queue_size calculation) stay within the 80-character line length limit.
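Taken together, the two comments above suggest a computation along these lines. This is only a sketch: the gen_config dict, the helper name, and the warning wording are assumptions for illustration, not the actual submit.py code.

```python
def compute_queue_size(gen_config: dict, concurrency,
                       enable_attention_dp: bool) -> int:
    """Sketch of the capped TLLM_BENCHMARK_REQ_QUEUES_SIZE computation."""
    # Cast early: values parsed from a config line or CLI may be strings.
    concurrency = int(concurrency)
    tp_size = int(gen_config.get('tensor_parallel_size', 1))
    max_batch_size = int(gen_config.get('max_batch_size', concurrency))

    # With attention DP the effective capacity scales with TP size.
    max_capacity = (max_batch_size * tp_size
                    if enable_attention_dp else max_batch_size)

    queue_size = min(max_capacity, concurrency)
    if queue_size < concurrency:
        print(f'Warning: capping queue size to {queue_size} '
              f'(requested concurrency {concurrency})')
    return queue_size
```

Note how the cast happens once at assignment, so every later comparison and min() call operates on ints, and the wrapped ternary keeps each line under 80 characters.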
📒 Files selected for processing (1)
examples/disaggregated/slurm/benchmark/submit.py
Shixiaowei02 left a comment:
Could you please revise it based on the agent comments? Thanks!
/bot run --disable-fail-fast
PR_Github #43419 [ run ] triggered by Bot.
PR_Github #43419 [ run ] completed with state
/bot run --disable-fail-fast
/bot run --disable-fail-fast
PR_Github #43732 [ run ] triggered by Bot.
PR_Github #43732 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #43767 [ run ] triggered by Bot.
PR_Github #43767 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #43976 [ run ] triggered by Bot.
PR_Github #43976 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #44202 [ run ] triggered by Bot.
PR_Github #44202 [ run ] completed with state
When attention DP is enabled, the effective max capacity is max_batch_size * tp_size, which can be smaller than the requested concurrency. Setting TLLM_BENCHMARK_REQ_QUEUES_SIZE to concurrency in that case causes the fill loop to hang waiting for slots that will never free. Cap queue_size to min(max_capacity, concurrency) and emit a warning when capped.
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
Cast concurrency, max_batch_size, and tp_size to int to prevent string arithmetic when concurrency comes from split(). Add explicit parentheses around the ternary for clarity. Run yapf to satisfy pre-commit formatting checks.
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
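The int casts this commit mentions guard against a classic Python pitfall: values taken from split() are strings, and * on a string repeats it rather than multiplying. A small standalone illustration (not the submit.py code; the literal values are made up):

```python
concurrency = "64"            # e.g. parsed from a line via split()
max_batch_size = 8

# String "arithmetic": repetition, not multiplication.
assert concurrency * max_batch_size == "6464646464646464"

# min() over mixed str/int raises TypeError in Python 3.
try:
    min(max_batch_size, concurrency)
    raised = False
except TypeError:
    raised = True
assert raised

# Casting first gives the intended numeric result.
assert min(max_batch_size * 4, int(concurrency)) == 32
```

Casting at the point of assignment, as the commit does, means every downstream expression is already numeric and no call site needs its own defensive int().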
Force-pushed from ae4ba76 to 9cce9cf
/bot run --disable-fail-fast
PR_Github #44238 [ run ] triggered by Bot.
PR_Github #44238 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #44312 [ run ] triggered by Bot.
PR_Github #44312 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #44557 [ run ] triggered by Bot.
PR_Github #44557 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #44654 [ run ] triggered by Bot.
PR_Github #44654 [ run ] completed with state
When attention DP is enabled, the effective max capacity is max_batch_size * tp_size, which can be smaller than the requested concurrency. Setting TLLM_BENCHMARK_REQ_QUEUES_SIZE to concurrency in that case causes the fill loop to hang waiting for slots that will never free. Cap queue_size to min(max_capacity, concurrency) and emit a warning when capped.
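To see why the uncapped setting deadlocks, here is a toy model of the failure mode (assumed semantics for illustration only, not the actual TRT-LLM fill loop): the engine exposes at most max_capacity concurrent slots, and a fill loop that targets a larger queue_size ends up waiting on a slot that never frees during the fill phase.

```python
def fill_loop(queue_size: int, max_capacity: int, max_steps: int = 1000) -> str:
    """Return 'filled' if the fill phase completes, 'hang' if it stalls."""
    in_flight = 0   # requests occupying engine slots (none free during fill)
    submitted = 0
    for _ in range(max_steps):
        if in_flight < max_capacity:
            in_flight += 1          # a slot is free: submit one request
            submitted += 1
        elif submitted < queue_size:
            return 'hang'           # blocked on a slot that will never free
        if submitted >= queue_size:
            return 'filled'
    return 'hang'

# Uncapped: queue_size = concurrency (64) > max_capacity (32) -> deadlock.
# Capped:   queue_size = min(32, 64) = 32 -> the fill phase completes.
```

With the cap in place the fill target can never exceed what the engine can hold, which is exactly the invariant the fix enforces.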
Summary by CodeRabbit
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.