[https://nvbugs/6104831][fix] Free recv buffer index on cancelled-after-ready disagg generation request #13673
Draft
chienchunhung wants to merge 3 commits into NVIDIA:main
Conversation
…disagg cancellation. Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com>
…ation Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com>
…er-ready disagg generation request

Fix signature NVIDIA#6 of NVBug 6104831 — a recv-buffer index leak that becomes a permanent global wedge under combined cancel+retry+long-prompt disagg load. The fix is two-layer because the leak has two distinct exit paths.

The signature #1 fix on the sender side correctly sends is_ready=false for cancelled-after-ready requests; on the receiver side that arrives as bool isReady = false from receiveReadySignal() in CacheReceiver::Impl::requestSync(). The pre-fix early return in requestSync() sets kDISAGG_TRANS_ERROR and returns without calling receiveSync(), so unformat() never runs and the recv buffer index reserved at the top of CacheReceiver::Impl::sendRequestInfo() is leaked. mRecvBufferCount defaults to 1 for the NIXL agent backend, so a single leaked recv buffer index is enough to wedge every subsequent assignBufferIndexForRecv() call forever inside the unbounded cv.wait in BaseTransBufferManager::assignBufferIndex.

Layer A — sendRequestInfo() exception safety. Track every (BaseTransBufferManager*, std::optional<size_t>) pair returned by assignBufferIndexForRecv() in a local vector. Wrap the rest of the function body in try {...} catch (...) { freeAssignedRecvBuffers(); throw; } so that any exception between assignment and the eventual freeBufferIndexForRecv() call inside unformat() releases the indices. On the success path the local tracking vector is explicitly cleared, because ownership has been handed off to the AgentConnection's mCacheBufferIds, which unformat() will free.

Layer B — requestSync() !isReady cleanup. Mirror what unformat() does on the success path: in the !isReady early-return branch, iterate the session's connections, look up each pre-assigned recv buffer ID via agentConnection->getPreAssignedBufferId(...), and free it via mgr->freeBufferIndexForRecv(id).
The new test test_cancelled_after_ready_does_not_leak_recv_buffer_index uses the NIXL backend (the only backend that goes through assignBufferIndexForRecv()), drives one full ctx/gen handshake to completion, exercises the cancelled-after-ready path once, and then issues a follow-up generation request on a worker thread with a 10 s probe timeout. Pre-fix, the worker thread stays alive past the timeout because assignBufferIndexForRecv() blocks; post-fix, the follow-up request completes normally.

This PR is chained on top of NVIDIA#13640 (the sig #1 fix) because the !isReady early-return path is only reachable once the sender-side cancellation correctly sends is_ready=false.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com> Made-with: Cursor
chienchunhung added a commit to chienchunhung/TensorRT-LLM that referenced this pull request on Apr 30, 2026
…13673/NVIDIA#13674 into the investigation report

The four sig NVIDIA#4 / NVIDIA#5 / NVIDIA#6 PRs are now open against NVIDIA/TensorRT-LLM:

- Sig NVIDIA#4: chained pair NVIDIA#13674 (test) -> NVIDIA#13671 (fix; carries 2 commits including the test; both PRs target main so they can be merged in order)
- Sig NVIDIA#5: combined test + fix in NVIDIA#13672 (independent of the #1 chain)
- Sig NVIDIA#6: combined test + fix in NVIDIA#13673, chained on top of NVIDIA#13640 (the #1 fix is a prerequisite for the !isReady early-return path)

Update the Signature <-> PR Map, the per-signature Status / PRs blocks, and Next Steps items 2 and 3 to reference these PR numbers and mark the corresponding follow-up items as done.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com> Made-with: Cursor
chienchunhung added a commit to chienchunhung/TensorRT-LLM that referenced this pull request on Apr 30, 2026
… PRs landed

Apply nine consistency fixes against the post-NVIDIA#13671/NVIDIA#13672/NVIDIA#13673/NVIDIA#13674 state of the investigation:

1. Front-matter Status block: replace the "sig NVIDIA#6 root-caused, validation in flight" wording with the post-run8 picture (all 6 TRT-LLM PRs in review; NVIDIA#7 is an out-of-scope NIXL bug; the deadline work is the TRT-LLM-side fallback for NVIDIA#7).
2. Front-matter Branches in this worktree: add the four new sig NVIDIA#4 / NVIDIA#5 / NVIDIA#6 branches.
3. Front-matter Related PRs: add NVIDIA#13674 / NVIDIA#13671 / NVIDIA#13672 / NVIDIA#13673, with a chained-on-NVIDIA#13640 callout for NVIDIA#13673.
4. "Configurations that did not reproduce": NVIDIA#5 and NVIDIA#6 now do reproduce in single-process unit tests via the new tests added by NVIDIA#13672 and NVIDIA#13673; only NVIDIA#3 and NVIDIA#7 remain field-only.
5. Phase 6 close: the sig NVIDIA#4 regression test is no longer isolated in local/rc11-disagg-repro; it is now in the chained NVIDIA#13674 / NVIDIA#13671 pair.
6. Signature NVIDIA#6 section: drop the "(suspected)" qualifier and the "(most likely)" hedging on Where-it-lives; both are confirmed by run7 and run8. Rename the section header to describe the actual failure shape (recv buffer index leak via the !isReady early return wedging assignBufferIndexForRecv) rather than the earlier control-path-stall hypothesis. Mirror the rename in the Signature - PR Map row.
7. File / Branch Index "New unit tests": add the new sig NVIDIA#5 (test_cancel_queued_gen_request_fulfills_receiver_future) and sig NVIDIA#6 (test_cancelled_after_ready_does_not_leak_recv_buffer_index, NIXL backend) tests.
8. Signature NVIDIA#3 status hypothesis: add a one-paragraph note that NIXL (signature NVIDIA#7) is now also a candidate cause of the half-initialized state, so a future field hit is not misattributed to a fresh TRT-LLM bug.
9. Phase 5 narrative: add a forward link explaining that the underlying terminal driver of the Phase-5 wedge was already NVIDIA#7 (NIXL), but NVIDIA#4 was the visible TRT-LLM-side symptom because the gen event loop was self-blocking before any of the later layers could surface.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com> Made-with: Cursor
chienchunhung added a commit to chienchunhung/TensorRT-LLM that referenced this pull request on Apr 30, 2026
… section

Fold two related pieces of analysis into the report as a new section between the Investigation Timeline and "Why the Existing Tests Did Not Catch This":

(1) A signature taxonomy refining the naive "burst -> timeout -> cancellation -> bug" framing. Four of the seven signatures (#1, NVIDIA#3, NVIDIA#5, NVIDIA#6) are direct cancellation-handling bugs; NVIDIA#4 is a structural latent blocking bug that cancellations expose; NVIDIA#2 is an eviction-driven bug that burst traffic exposes via memory pressure; NVIDIA#7 is a NIXL-internal contention bug that the same load shape happens to trigger but which is not strictly a cancellation bug. Includes a refined trigger-chain diagram and two precise corrections (burst alone is not the trigger; "cancellation" is one of several entry points to cleanup paths).

(2) A cascade map distinguishing two kinds of inter-signature tangling:

- Type 1 (a fix produces a new signature): only one case, the #1 fix producing NVIDIA#6 by making the receiver-side !isReady early-return path reachable in production where a latent recv-buffer leak existed. This is why the NVIDIA#6 PR (NVIDIA#13673) is explicitly chained on the #1 fix PR (NVIDIA#13640).
- Type 2 (a fix exposes a pre-existing signature): three cases, where the NVIDIA#4 fix exposes NVIDIA#5 / NVIDIA#6 and the NVIDIA#6 fix exposes NVIDIA#7, because the upstream fix removes the masking effect on the downstream bug. These are not regressions of the fixes; they were latent pre-existing issues.
- A subtler third relationship: NVIDIA#4 is structurally a defensive catcher for any upstream bug that produces a never-resolving receiver future. The NVIDIA#4 fix is independently valuable as defence in depth, not just a symptomatic patch.

Also includes a fix-to-file mapping showing that the fixes do not overlap in code; the only structural dependency is the NVIDIA#6 -> #1 chain enforced by the PR base.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com> Made-with: Cursor
@coderabbitai summary
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.