
[None][fix] Cap TLLM_BENCHMARK_REQ_QUEUES_SIZE to avoid fill-loop hang#13065

Merged
reasonsolo merged 2 commits into NVIDIA:main from reasonsolo:fix-disagg-bench-queue-size-cap on Apr 21, 2026

Conversation

@reasonsolo (Collaborator) commented Apr 15, 2026

When attention DP is enabled, the effective max capacity is max_batch_size * tp_size, which can be smaller than the requested concurrency. Setting TLLM_BENCHMARK_REQ_QUEUES_SIZE to concurrency in that case causes the fill loop to hang waiting for slots that will never free. Cap queue_size to min(max_capacity, concurrency) and emit a warning when capped.
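The cap described above amounts to clamping the queue size before exporting it. A minimal sketch of that logic, assuming the values described in this PR (the helper name `capped_queue_size` is illustrative; the actual change lives inline in examples/disaggregated/slurm/benchmark/submit.py and its variable names may differ):

```python
import warnings


def capped_queue_size(concurrency: int, max_batch_size: int, tp_size: int,
                      enable_attention_dp: bool) -> int:
    # With attention DP enabled, each of the tp_size ranks runs its own
    # batch, so the effective capacity is max_batch_size * tp_size;
    # otherwise it is just max_batch_size.
    max_capacity = (max_batch_size *
                    tp_size) if enable_attention_dp else max_batch_size
    # Never enqueue more requests than can ever be in flight at once, or
    # the fill loop blocks forever waiting for slots that will never free.
    queue_size = min(max_capacity, concurrency)
    if queue_size < concurrency:
        warnings.warn(
            f"Capping TLLM_BENCHMARK_REQ_QUEUES_SIZE to {queue_size}: "
            f"requested concurrency {concurrency} exceeds effective "
            f"capacity {max_capacity}")
    return queue_size
```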

Summary by CodeRabbit

  • Bug Fixes
    • Improved queue size configuration for generation-only benchmarks: queue sizes are now computed and capped from the batch size, tensor parallelism, and attention DP settings, with a warning when the requested concurrency exceeds the computed limit.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@coderabbitai (Contributor, Bot) commented Apr 15, 2026

📝 Walkthrough

Modifies the computation of TLLM_BENCHMARK_REQ_QUEUES_SIZE in the GEN worker configuration during gen_only benchmark mode. Instead of using raw concurrency, the code now calculates a capped queue size based on batch size, tensor parallelism, and attention distribution policy, with a warning if the computed value is lower than requested concurrency.

Changes

Request Queue Size Configuration (examples/disaggregated/slurm/benchmark/submit.py):
Modified the computation of TLLM_BENCHMARK_REQ_QUEUES_SIZE to derive a capped value from batch configuration parameters instead of using the raw concurrency value; added a warning when the capped value is less than the configured concurrency.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
  • Description check ❓ Inconclusive: The description explains the issue and solution, but the required Description and Test Coverage sections are missing or incomplete. Resolution: add a dedicated Description section explaining the issue/solution and a Test Coverage section listing the tests that safeguard these changes.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title is fully related to the main change: capping TLLM_BENCHMARK_REQ_QUEUES_SIZE to avoid a fill-loop hang in the benchmark queue configuration.
  • Docstring coverage ✅ Passed: 100.00%, above the required 80.00% threshold.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai (Contributor, Bot) left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/disaggregated/slurm/benchmark/submit.py`:
- Around line 235-237: The formatting in the block computing max_capacity and
queue_size (variables max_capacity, max_batch_size, tp_size,
enable_attention_dp, queue_size, concurrency) violates yapf/PEP8; run the
project's pre-commit hooks or yapf/ruff formatter to reflow and wrap these
lines, then commit the resulting changes so CI stops reporting
modifications—ensure the logic remains the same (max_capacity = max_batch_size *
tp_size if enable_attention_dp else max_batch_size; queue_size =
min(max_capacity, int(concurrency)); if queue_size < int(concurrency):) while
fixing only formatting.
- Around line 232-237: max_batch_size and related numeric variables are being
mixed with string values (concurrency) causing string repetition and TypeError;
update the queue-size computation by casting concurrency, max_batch_size, and
tensor_parallel_size (tp_size) to int at assignment (e.g., set concurrency =
int(concurrency) and max_batch_size = int(gen_config.get('max_batch_size',
concurrency))) then compute max_capacity = max_batch_size * tp_size if
enable_attention_dp else max_batch_size and queue_size = min(max_capacity,
concurrency); also split or shorten any long expressions so the modified lines
(especially the max_capacity/queue_size calculation) stay within the
80-character line length limit.
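The second review comment above is a classic str/int mix-up. A hedged illustration of the failure mode and the cast-based fix, using toy values (in the real script, concurrency comes from the benchmark configuration):

```python
# concurrency often arrives as a string, e.g. from splitting a spec on a
# delimiter; mixing it with ints then misbehaves instead of failing early.
concurrency = "128"
max_batch_size = 64
tp_size = 8
enable_attention_dp = True

# Without the cast, "128" * 8 would repeat the string ("128128..."), and
# min(512, "128") raises TypeError in Python 3 -- hence the explicit int().
concurrency = int(concurrency)
max_capacity = (max_batch_size * tp_size) if enable_attention_dp \
    else max_batch_size
queue_size = min(max_capacity, concurrency)
print(queue_size)  # 128
```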

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: c508a93d-b322-401d-9264-3a11697e212b

📥 Commits

Reviewing files that changed from the base of the PR and between d09ed1e and f88d61f.

📒 Files selected for processing (1)
  • examples/disaggregated/slurm/benchmark/submit.py

Comment thread examples/disaggregated/slurm/benchmark/submit.py Outdated
@Shixiaowei02 (Collaborator) left a comment


Could you please revise it based on the agent comments? Thanks!

@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #43419 [ run ] triggered by Bot. Commit: ae4ba76

@tensorrt-cicd (Collaborator)

PR_Github #43419 [ run ] completed with state SUCCESS. Commit: ae4ba76
/LLM/main/L0_MergeRequest_PR pipeline #33951 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@reasonsolo reasonsolo enabled auto-merge (squash) April 16, 2026 08:29
@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #43732 [ run ] triggered by Bot. Commit: ae4ba76

@tensorrt-cicd (Collaborator)

PR_Github #43732 [ run ] completed with state FAILURE. Commit: ae4ba76
/LLM/main/L0_MergeRequest_PR pipeline #34214 completed with status: 'FAILURE'


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #43767 [ run ] triggered by Bot. Commit: ae4ba76

@tensorrt-cicd (Collaborator)

PR_Github #43767 [ run ] completed with state FAILURE. Commit: ae4ba76
/LLM/main/L0_MergeRequest_PR pipeline #34248 completed with status: 'FAILURE'


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #43976 [ run ] triggered by Bot. Commit: ae4ba76

@tensorrt-cicd (Collaborator)

PR_Github #43976 [ run ] completed with state SUCCESS. Commit: ae4ba76
/LLM/main/L0_MergeRequest_PR pipeline #34417 completed with status: 'FAILURE'


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #44202 [ run ] triggered by Bot. Commit: ae4ba76

@tensorrt-cicd (Collaborator)

PR_Github #44202 [ run ] completed with state SUCCESS. Commit: ae4ba76
/LLM/main/L0_MergeRequest_PR pipeline #34627 completed with status: 'FAILURE'


Cast concurrency, max_batch_size, and tp_size to int to prevent
string arithmetic when concurrency comes from split(). Add explicit
parentheses around the ternary for clarity. Run yapf to satisfy
pre-commit formatting checks.

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
@reasonsolo reasonsolo force-pushed the fix-disagg-bench-queue-size-cap branch from ae4ba76 to 9cce9cf Compare April 20, 2026 01:37
@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #44238 [ run ] triggered by Bot. Commit: 9cce9cf

@tensorrt-cicd (Collaborator)

PR_Github #44238 [ run ] completed with state SUCCESS. Commit: 9cce9cf
/LLM/main/L0_MergeRequest_PR pipeline #34660 completed with status: 'FAILURE'


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #44312 [ run ] triggered by Bot. Commit: 9cce9cf

@tensorrt-cicd (Collaborator)

PR_Github #44312 [ run ] completed with state FAILURE. Commit: 9cce9cf
/LLM/main/L0_MergeRequest_PR pipeline #34732 completed with status: 'FAILURE'


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #44557 [ run ] triggered by Bot. Commit: 9cce9cf

@tensorrt-cicd (Collaborator)

PR_Github #44557 [ run ] completed with state SUCCESS. Commit: 9cce9cf
/LLM/main/L0_MergeRequest_PR pipeline #34947 completed with status: 'FAILURE'


@reasonsolo (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #44654 [ run ] triggered by Bot. Commit: 9cce9cf

@tensorrt-cicd (Collaborator)

PR_Github #44654 [ run ] completed with state SUCCESS. Commit: 9cce9cf
/LLM/main/L0_MergeRequest_PR pipeline #35028 completed with status: 'SUCCESS'


@reasonsolo reasonsolo merged commit 1d0ddd2 into NVIDIA:main Apr 21, 2026
5 checks passed


4 participants