
Cudagraphs: Remove fwd_graph_input_surface weakref#3970

Merged
mathemakitten merged 2 commits into NVIDIA:main from mathemakitten:helenn-fix-weakref-cudagraph-input-surface
Mar 20, 2026

Conversation


@mathemakitten mathemakitten commented Mar 20, 2026

What does this PR do ?

Tensors in fwd_graph_input_surface that are not owned by the cudagraph memory pool should not be weakref'd. _WeakRefTensor takes a data_ptr(), constructs a weakref, and then wraps that pointer back into a torch tensor based on the raw integer address, not on the original tensor's storage object. PyTorch's reference counting hinges on the storage object's refcount, and the caching allocator frees the block when that refcount drops to 0.

The weakref created has a different storage object, so two tensors point to the same CUDA address, but only the original keeps the caching allocator from freeing it. tensor_strong_refs should keep the memory live in theory, but if the caching allocator is under pressure, or the cache is emptied, it may move the original tensor's data_ptr(). Since this tensor is in tensor_strong_refs and not in the graph pool, it is not guaranteed a fixed address, leaving the weakref holding a stale data pointer to memory that has been freed, which results in a segfault.
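The failure mode above can be sketched in plain Python (no CUDA required). All names here are illustrative stand-ins, not Megatron-LM or PyTorch internals: the point is that a wrapper rebuilt from a raw integer address contributes nothing to the storage refcount the allocator keys on.

```python
import weakref

class Storage:
    """Stands in for a tensor's storage object; the allocator frees the
    block once the last strong reference to the storage drops."""
    def __init__(self, addr):
        self.addr = addr

class CachingAllocator:
    """Toy allocator that frees a block when its Storage is collected,
    mirroring refcount-driven freeing in a caching allocator."""
    def __init__(self):
        self.live_blocks = set()  # addresses currently backed by memory

    def alloc(self, addr):
        storage = Storage(addr)
        self.live_blocks.add(addr)
        # Free the block as soon as the Storage object is garbage-collected.
        weakref.finalize(storage, self.live_blocks.discard, addr)
        return storage

    def is_live(self, addr):
        return addr in self.live_blocks

allocator = CachingAllocator()
original = allocator.alloc(0x7F000000)

# A "weakref tensor" rebuilt from the raw integer address shares no storage
# object with the original, so it adds nothing to the refcount.
raw_addr = original.addr

del original                             # last strong reference gone -> freed
assert not allocator.is_live(raw_addr)   # raw_addr now dangles: the segfault scenario
```

Tensors inside the cudagraph pool avoid this because the pool itself guarantees the address stays fixed and backed for the graph's lifetime; tensors outside it have no such guarantee.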

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@mathemakitten mathemakitten requested review from a team as code owners March 20, 2026 19:01
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 20, 2026 19:01
@github-actions

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.


copy-pr-bot bot commented Mar 20, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 20, 2026
@mathemakitten mathemakitten marked this pull request as ready for review March 20, 2026 19:02
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 20, 2026 19:02
ref.can_skip_replay_copy = arg.can_skip_replay_copy
return ref

self.fwd_graph_input_surface = tree_map(

@jiemingz jiemingz Mar 20, 2026


I think the fwd input surface might hold references to tensors in the graph pool, for instance if an fwd output is directly consumed as a fwd input?
Since _resolve_input_buffer will only allocate from the reuse pool, why not filter out only the problematic tensors?

@mathemakitten (Contributor, Author)

Tested this out and it works to filter out only the tensors owned by the reuse pool. See replace_with_weak_ref_for_input_surface. I made a new function instead of adding the check to the existing replace_with_weak_ref to avoid the extra check on surfaces where it isn't needed.
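The fix described in this exchange can be sketched as follows. This is a hypothetical illustration of the approach (weakref only the tensors owned by the reuse pool, pass everything else through), not Megatron-LM's actual `replace_with_weak_ref_for_input_surface`; `FakeTensor`, `WeakRefTensor`, and the pool-membership check are all assumptions made for the sketch.

```python
class FakeTensor:
    """Illustrative stand-in for a torch.Tensor exposing data_ptr()."""
    def __init__(self, addr):
        self._addr = addr
    def data_ptr(self):
        return self._addr

class WeakRefTensor:
    """Illustrative stand-in for _WeakRefTensor: holds the raw address."""
    def __init__(self, tensor):
        self.addr = tensor.data_ptr()

def replace_with_weak_ref_for_input_surface(arg, pool_addrs):
    # Only pool-owned storage has a fixed, guaranteed-live address for the
    # graph's lifetime, so only those tensors are safe to hold by raw pointer.
    if isinstance(arg, FakeTensor) and arg.data_ptr() in pool_addrs:
        return WeakRefTensor(arg)
    return arg  # not pool-owned: keep the strong reference

pool = {0x1000}
in_pool = replace_with_weak_ref_for_input_surface(FakeTensor(0x1000), pool)
outside = replace_with_weak_ref_for_input_surface(FakeTensor(0x2000), pool)
```

In this sketch `in_pool` comes back as a weakref wrapper while `outside` keeps its strong reference, matching the filtering behavior discussed above.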


@jiemingz jiemingz left a comment


LGTM thank you

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Mar 20, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels Mar 20, 2026
@mathemakitten mathemakitten enabled auto-merge March 20, 2026 21:01
@mathemakitten mathemakitten added this pull request to the merge queue Mar 20, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23362863457

Merged via the queue into NVIDIA:main with commit 197242e Mar 20, 2026
61 of 64 checks passed
@mathemakitten mathemakitten deleted the helenn-fix-weakref-cudagraph-input-surface branch March 20, 2026 21:44

Labels

Approved (All necessary approvals have been made), complexity: low


5 participants