cuda devices should have same dtype #25470


Closed · wants to merge 1 commit

Conversation

@jfc4050 (Contributor) commented on Aug 30, 2019

addresses #25465

We were passing two tensors of different dtypes into a check that is supposed to verify that the two tensors are on the same device.
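
For illustration only, here is a minimal sketch (not the actual PyTorch code) of the kind of same-device check the fixed test exercises; the helper name `check_same_device` and the tensor shapes are hypothetical:

```python
import torch

def check_same_device(tensors):
    # The check should compare devices, not dtypes: tensors with
    # different dtypes may still legitimately share a device.
    first = tensors[0].device
    if any(t.device != first for t in tensors):
        raise ValueError("all tensors must be on the same device")

# Two tensors with different dtypes but the same (CPU) device pass the check.
check_same_device([torch.zeros(2, dtype=torch.float32),
                   torch.zeros(2, dtype=torch.float64)])
```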

@pytorchbot added the oncall: distributed label on Aug 30, 2019
@mrshenli (Contributor) left a comment

Ah, yes, I misread the error message; it's the data type, not the device type.

@facebook-github-bot left a comment

@mrshenli has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot commented
@mrshenli merged this pull request in e26305e.

@pietern (Contributor) commented on Sep 2, 2019

@jfc4050 This test is skipped on CPU-only builds. Can you post a follow-up change to break it out into non-CUDA and CUDA test cases with checks? I would like to run all tests associated with allreduce_coalesced regardless of whether CUDA is available. Then, for any tests that require CUDA, we can have a separate test case (or guard the check in some other way).
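
A minimal sketch of the split being suggested, using a unittest skip guard so the CUDA variant only runs when a GPU is present; the class and test names here are hypothetical, not the actual test_c10d test names:

```python
import unittest
import torch

class AllreduceCoalescedChecksTest(unittest.TestCase):
    def test_checks_cpu(self):
        # Runs on every build: validation logic that does not need a GPU.
        tensors = [torch.zeros(2), torch.zeros(2)]
        self.assertTrue(all(t.device == tensors[0].device for t in tensors))

    @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
    def test_checks_cuda(self):
        # Guarded variant: only exercised when CUDA is available.
        tensors = [torch.zeros(2, device="cuda"), torch.zeros(2, device="cuda")]
        self.assertTrue(all(t.is_cuda for t in tensors))
```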

@jfc4050 (Contributor, Author) commented on Sep 3, 2019

> @jfc4050 This test is skipped on CPU-only builds. Can you post a follow-up change to break it out into non-CUDA and CUDA test cases with checks? I would like to run all tests associated with allreduce_coalesced regardless of whether CUDA is available. Then, for any tests that require CUDA, we can have a separate test case (or guard the check in some other way).

Sure! We were actually talking about doing this in another thread, but I hadn't gotten around to it yet. Made a PR (#25555) and CC'd you.
