
Add device guard around MPI operations #22446

Closed
pietern wants to merge 1 commit

Conversation

@pietern (Contributor) commented Jul 2, 2019

If the current CUDA device is not the same as the device that hosts
the tensor the operation works on, then OpenMPI will segfault, as
reported in #21922. This change adds a device guard for every
operation to ensure the correct device is set.

Fixes #21922.
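
For readers following along, below is a minimal sketch of the idea, not the exact diff. It uses c10::DeviceGuard as an example of PyTorch's RAII device guards; the function name runAllreduce, the communicator argument, and the float dtype are illustrative assumptions rather than the actual ProcessGroupMPI internals. The point is that the guard makes the tensor's device current before the buffer is handed to a CUDA-aware OpenMPI, and restores the previous device afterwards.

```cpp
// Sketch only: assumes a CUDA-aware OpenMPI build and a contiguous float
// tensor. runAllreduce and the comm argument are illustrative names.
#include <mpi.h>

#include <ATen/ATen.h>
#include <c10/core/DeviceGuard.h>

void runAllreduce(at::Tensor& tensor, MPI_Comm comm) {
  // RAII guard: makes the tensor's device the current device for the scope
  // of this call and restores the previously current device on exit.
  c10::DeviceGuard guard(tensor.device());

  // With the correct device current, hand the (possibly CUDA) buffer to MPI.
  MPI_Allreduce(
      MPI_IN_PLACE,
      tensor.data_ptr(),
      static_cast<int>(tensor.numel()),
      MPI_FLOAT, // assumes a float tensor for brevity
      MPI_SUM,
      comm);
}
```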

@pytorchbot added the oncall: distributed label Jul 2, 2019
@pietern added the triaged label Jul 2, 2019
@facebook-github-bot (Contributor) left a comment

@pietern is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented

@pietern merged this pull request in c9f41e9.

xzhu1900 pushed a commit to xzhu1900/pytorch that referenced this pull request Jul 5, 2019
Summary:
If the current CUDA device is not the same as the device that hosts
the tensor the operation works on, then OpenMPI will segfault, as
reported in pytorch#21922. This change adds a device guard for every
operation to ensure the correct device is set.

Fixes pytorch#21922.
Pull Request resolved: pytorch#22446

Differential Revision: D16106823

Pulled By: pietern

fbshipit-source-id: 99d762eb3851c0a0e0b4fe81cf27c1c8d35596cc
Labels
Merged · oncall: distributed · triaged
Successfully merging this pull request may close these issues.

Segmentation fault using all_reduce with cuda:1 (MPI)
5 participants