Delete DeviceGuard(int64_t) constructor. #13232
Closed
ezyang wants to merge 12 commits into export-D10855883 from
Conversation
Differential Revision: D10858024 Differential Version: 61911842
Differential Revision: D10858024 Differential Version: 61940689
Differential Revision: D10858024 Differential Version: 61943786
Differential Revision: D10858024 Differential Version: 61944580
Differential Revision: D10858024 Differential Version: 61945978
Differential Revision: D10858024 Differential Version: 61950975
Differential Revision: D10858024 Differential Version: 61951987
teng-li (Contributor) approved these changes on Oct 29, 2018:
CI is failing
Differential Revision: D10858024 Differential Version: 61956053
Differential Revision: D10858024 Differential Version: 61998100
ezyang commented on Oct 30, 2018 on this hunk:

      if (should_compute_output(1)) {
        at::DeviceGuard device_guard(src_device);
    -   if (grad.is_cuda() && grad.get_device() != src_device) {
    +   if (grad.device() != src_device) {

This comment was marked as off-topic.
ezyang commented on Oct 30, 2018 on this hunk:

    -   const int32_t tensor_device = self.is_cuda() ? self.get_device() : -1;
    -   at::DeviceGuard device_guard(device_index.value_or(tensor_device));
    +   at::DeviceGuard device_guard(self.device());

This comment was marked as off-topic.
Differential Revision: D10858024 Differential Version: 62022323
Differential Revision: D10858024 Differential Version: 62043486
Differential Revision: D10858024 Differential Version: 62135960
zdevito pushed a commit to zdevito/ATen that referenced this pull request on Oct 31, 2018:

Summary: Pull Request resolved: pytorch/pytorch#13232
DeviceGuard should be device agnostic, which means that it shouldn't assume that int64_t means select the CUDA device.
Reviewed By: gchanan
Differential Revision: D10858024
fbshipit-source-id: b40e8337e4046906fd8f83a95e6206367fb29dbe
Stack:
:white_circle: #13133 Add c10::Stream, make at::cuda::CUDAStream use it. 💛
:black_circle: #13232 Delete DeviceGuard(int64_t) constructor. 💛
:white_circle: #13275 Replace CUDA-specific set_index(_from) method from DeviceGuard with set_device. 💛
:white_circle: #13342 Generalize DeviceGuard to work with arbitrary DeviceType. 💛
DeviceGuard should be device agnostic, which means that it shouldn't
assume that int64_t means select the CUDA device.
Differential Revision: D10858024