
Conversation

@lw commented Jun 2, 2021

Stack from ghstack:

PyTorch requires users to manually record tensors with the CUDA caching allocator when switching streams. We weren't doing it.

Also, the usage of an Event can be simplified by using `s1.wait(s2)`.

Differential Revision: [D28832902](https://our.internmc.facebook.com/intern/diff/D28832902/)

[ghstack-poisoned]
@facebook-github-bot added the oncall: distributed and cla signed labels Jun 2, 2021

facebook-github-bot commented Jun 2, 2021

💊 CI failures summary and remediations

As of commit 2dd6e82 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-scanned failure(s)

1 failure not recognized by patterns:

| Job | Step | Action |
| --- | --- | --- |
| CircleCI pytorch_linux_bionic_py3_8_gcc9_coverage_test1 | Run tests | 🔁 rerun |

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

lw added 2 commits June 3, 2021 02:25
@mrshenli left a comment


LGTM!

lw added 4 commits June 3, 2021 08:04
@facebook-github-bot

This pull request has been merged in 3e7396f.

@facebook-github-bot deleted the gh/lw/202/head branch June 7, 2021 14:17
deniskokarev pushed a commit to deniskokarev/pytorch that referenced this pull request Jun 9, 2021
Summary:
Pull Request resolved: pytorch#59297

PyTorch requires users to manually record tensors with the CUDA caching allocator when switching streams. We weren't doing it.

Also, the usage of an Event can be simplified by using `s1.wait(s2)`.
ghstack-source-id: 130583777

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D28832902

fbshipit-source-id: cd4f40ff811fa1b0042deedda2456e22f33b92bd
