Cudagraphs: Remove fwd_graph_input_surface weakref #3970
Conversation
```python
ref.can_skip_replay_copy = arg.can_skip_replay_copy
return ref
```

```python
self.fwd_graph_input_surface = tree_map(
```
I think the fwd input surface might hold references to tensors in the graph pool, for instance if an fwd output is directly consumed as a fwd input? Since `_resolve_input_buffer` will only allocate from the reuse pool, why not filter out only the problematic tensors?
Tested this out, and filtering out only the tensors owned by the reuse pool works. See `replace_with_weak_ref_for_input_surface`. I made a new function instead of adding the check to the existing `replace_with_weak_ref` to avoid the extra check on surfaces where it isn't needed.
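A minimal sketch of the filtering idea described above (the function name comes from this discussion, but `FakeTensor` and the pool-ownership check are placeholders, not the actual Megatron-LM implementation):

```python
import weakref

class FakeTensor:
    """Placeholder for a torch.Tensor in this sketch."""

def replace_with_weak_ref_for_input_surface(t, pool_owned):
    # Only tensors owned by the cudagraph reuse pool have pinned
    # addresses, so only those are safe to weakref; everything else
    # keeps its strong reference.
    return weakref.ref(t) if t in pool_owned else t

pool_t, outside_t = FakeTensor(), FakeTensor()
pool_owned = {pool_t}

surface = [replace_with_weak_ref_for_input_surface(t, pool_owned)
           for t in (pool_t, outside_t)]
# surface[0] is a weakref; surface[1] is still the original strong ref
```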
What does this PR do?
Tensors in `fwd_graph_input_surface` which are not owned by the cudagraph memory pool should not be weakref'd. `_WeakRefTensor` takes a `data_ptr()` and constructs a weakref, then wraps that pointer back into a torch tensor based on the raw integer address, not the original tensor's storage object. PyTorch's reference counting hinges on the storage object's refcount, and the caching allocator frees the block when the refcount drops to 0. The weakref created has a different storage object, so you end up with two tensors pointing to the same CUDA address, but only the original keeps the caching allocator from freeing it.
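The failure mode can be illustrated with plain Python weakrefs (a simplified analogy: `Storage` here is a hypothetical stand-in for a tensor's storage block, not the real PyTorch class):

```python
import weakref

class Storage:
    """Hypothetical stand-in for a tensor's storage block."""
    def __init__(self, addr):
        self.addr = addr  # raw device address, like data_ptr()

storage = Storage(addr=0xDEAD0000)  # the original tensor's storage
wr = weakref.ref(storage)

# Rebuilding a tensor from the raw integer address copies the address
# but does not hold (or bump the refcount of) the original storage.
rebuilt_addr = storage.addr

del storage          # refcount hits 0: the allocator may free/reuse the block
assert wr() is None  # the weakref dies with the storage...
# ...but rebuilt_addr still holds the now-stale raw address
```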
`tensor_strong_refs` should keep this memory live in theory, but if the caching allocator is under pressure, or the cache is emptied, it might move the original tensor's `data_ptr()`. Since this tensor is in `tensor_strong_refs` and not in the graph pool, it is not guaranteed a fixed address, leaving the weakref holding a stale data pointer that has been freed, resulting in a segfault.

Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Reviewers are assigned based on `.github/CODEOWNERS`. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change `megatron/core`, once all expert reviewers have approved, the `Final Review` label is applied automatically and final reviewers are assigned. For PRs outside `megatron/core`, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the `Approved` label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.