
Makes a streaming backward test try gradient stealing more directly #60065


Closed
mcarilli wants to merge 2 commits

Conversation

@mcarilli (Collaborator) commented Jun 16, 2021

Closes #59846.

#59846 is likely paranoia; some of the test_streaming_backward_* tests in test_cuda.py already exercise gradient stealing (i.e., they start with .grad as None before backward). Regardless, this PR augments one of those tests to stress gradient stealing a bit more directly.
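For context, here is a minimal sketch of what gradient stealing under a streaming backward looks like. This is a hypothetical illustration, not the test code added in this PR: when a leaf's .grad is None before backward, the autograd engine can take ownership of (steal) the incoming gradient tensor rather than accumulating into an existing buffer, and it must sync the leaf's stream with the stream that produced that gradient.

```python
import torch

# Hypothetical illustration (not the exact test in this PR): a CUDA leaf
# whose .grad is None before backward, with the forward pass recorded on
# a side stream.
a = torch.randn(5, device="cuda", requires_grad=True)
assert a.grad is None  # precondition for gradient stealing

side_stream = torch.cuda.Stream()
side_stream.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(side_stream):
    # Backward ops run on the stream of their matching forward op, so
    # a's gradient will be produced on side_stream.
    loss = (a * 2.0).sum()

# With a.grad == None, the engine may steal the gradient tensor produced
# on side_stream and assign it to a.grad directly; the PR's concern is
# that the proper leaf-stream sync is inserted before a.grad is usable.
loss.backward()
torch.cuda.synchronize()
assert torch.equal(a.grad, torch.full_like(a, 2.0))
```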

@facebook-github-bot (Contributor) commented Jun 16, 2021

💊 CI failures summary and remediations

As of commit e2769ad (more details on the Dr. CI page and at hud.pytorch.org/pr/60065):


  • 1/1 failures introduced in this PR

1 failure not recognized by patterns:

Job: GitHub Actions Lint / shellcheck
Step: Assert that regenerating the workflows didn't change them
Action: 🔁 rerun

Preview docs built from this PR

This comment was automatically generated by Dr. CI.

@mcarilli requested review from mruberry and ngimel June 16, 2021 02:59
@mcarilli added the module: cuda (Related to torch.cuda, and CUDA support in general) and module: autograd (Related to torch.autograd, and the autograd engine in general) labels Jun 16, 2021
@mcarilli changed the title Small streaming backwards test augment → Makes a streaming backward test try gradient stealing more directly Jun 16, 2021
@mruberry (Collaborator)

This seems fine but I'll let @ngimel review.

@anjali411 added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jun 16, 2021
@facebook-github-bot (Contributor)

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@ngimel merged this pull request in 9fb6b40.

Labels
cla signed
Merged
open source
module: autograd (Related to torch.autograd, and the autograd engine in general)
module: cuda (Related to torch.cuda, and CUDA support in general)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

Need a test to make sure the autograd engine inserts proper leaf stream syncs for stolen gradients
6 participants