Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) #27444
Conversation
The ASAN build failures are unrelated but mandate a retest. Will wait until review in case there are changes (which will then retrigger the CI).
How are you "testing" this change? :)
return u.matmul(s.expand(batch_dims + (matrix_size, matrix_size)).matmul(v.transpose(-2, -1)))

def lu_solve_test_helper(self, A_dims, b_dims, cast, pivot):
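The quoted helper signature above suggests the usual shape of such a test: factor a matrix, solve against a right-hand side, and assert a small residual. Below is a hedged sketch of that pattern using the modern `torch.linalg` API (which postdates this PR); the helper name and structure here are illustrative, not PyTorch's actual test code.

```python
import torch

def lu_solve_check(A, b):
    # Factor A once, then back-substitute for x; this mirrors the shape of
    # an lu_solve-style test helper, but the name and details are illustrative.
    LU, pivots = torch.linalg.lu_factor(A)
    x = torch.linalg.lu_solve(LU, pivots, b)
    # The residual check is what such a helper ultimately asserts.
    return torch.allclose(A @ x, b, atol=1e-8)

A = torch.randn(4, 4, dtype=torch.double)
A = A @ A.T + 4 * torch.eye(4, dtype=torch.double)  # well-conditioned SPD system
b = torch.randn(4, 2, dtype=torch.double)
print(lu_solve_check(A, b))  # True: the residual is tiny in double precision
```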
Heh, dead code, nice!
This code was inlined into test_torch, its only caller.
# TODO: remove this global setting
# Autograd tests use double as the default dtype
torch.set_default_dtype(torch.double)
So, I recall that you noticed that the CUDA autograd tests were done with float. Is that still the case after this change?
I must have been mistaken. Previously an equivalent import came from common_utils, and this setting also applies to CUDA tensors. There is no functional change here.
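The point that the setting "also applies to CUDA tensors" is because `torch.set_default_dtype` is process-global, independent of device. A minimal demonstration of why having an import silently flip this is surprising:

```python
import torch

# The default floating dtype starts as float32 in a fresh process.
assert torch.tensor([1.0]).dtype == torch.float32

# The global setting changes what Python floats and factory functions produce...
torch.set_default_dtype(torch.double)
assert torch.tensor([1.0]).dtype == torch.float64
assert torch.randn(2, 2).dtype == torch.float64

# ...and it applies regardless of device: a CUDA tensor created now
# is also float64 (guarded here since CUDA may be unavailable).
if torch.cuda.is_available():
    assert torch.randn(2, 2, device="cuda").dtype == torch.float64

torch.set_default_dtype(torch.float32)  # restore, to avoid leaking global state
```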
In the interest of avoiding merge conflicts, I'm approving this after only a cursory review. I am still curious about how the changes are being "validated", though.
"Validation" is an interesting question. We have changed the majority of the Python test suite to use float instead of double for floating point tensors. Tests that can pass with this change are left using float. This is a change to our testing surface (we are testing double tensors much less). Tests that require the default floating dtype to be double keep the previous behavior.

I was conservative about keeping tests in double to avoid this PR growing too large, hence the four test files that immediately declare the default dtype to be double. Beyond the test suite ensuring this change does not break our CI, I also plan a follow-up review of test coverage (which will include dtype coverage).

Another issue with this plan is that third parties may have relied (knowingly or otherwise) on common_utils setting the default tensor dtype. However, I think it would be productive to break their test suites and make them either explicitly declare the behavior if they want it, or start ensuring their float behavior works as expected.
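The claim that switching from double to float changes the testing surface is concrete: float32 has roughly 7 decimal digits of precision versus roughly 16 for float64, so tolerances tuned for one can silently pass or fail under the other. A small self-contained illustration (not from the PR itself):

```python
import torch

# In float32, 1e-8 is below machine epsilon (~1.19e-7), so adding it is a no-op.
x = torch.tensor(1.0) + 1e-8
print(bool(x == 1.0))   # True

# In float64 (eps ~2.2e-16) the same addition is representable.
y = torch.tensor(1.0, dtype=torch.double) + 1e-8
print(bool(y == 1.0))   # False
```

This is why tests that numerically require double must now opt in explicitly rather than inherit it from an import.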
@pytorchbot rebase this please.
The rebase is to validate that the ASAN builds pass CI; they failed to build in the previous run.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
…DoubleTensor) (pytorch#27444)

Summary: This PR stops common_utils.py from setting the default tensor type when it's imported. See issue pytorch#27355. This is a frequent source of confusion for test writers.

Many tests relied on this setting (whether they knew it or not), and this PR also updates the test suite to pass without common_utils.py setting the default tensor type. Some larger test files now set the default floating dtype themselves, however. These test files are:

- test_autograd.py
- test_distributions.py
- test_jit.py
- test_nn.py

This is still a significant improvement from today, however. First, these files set the default floating dtype much more clearly than importing it from common_utils. Second, the rest of the test suite no longer sets it globally. Third, this PR is a springboard for updating those tests, too. In particular, as tests are made generic they can be moved away from relying on this global setting.

Notable technical changes in this PR are:

- Significant updates to test_torch.py so it passes without setting the default floating dtype globally.
- The default_floating_dtype decorator is now defined in common_utils; a couple of versions of this decorator were previously defined in individual test files.
- test_torch-specific parts of common_utils were refactored into test_torch.
- Tensor creation methods in common_utils were updated to accept an optional dtype and device.

Pull Request resolved: pytorch#27444
Differential Revision: D17795235
Pulled By: mruberry
fbshipit-source-id: 7f77271c0c836e69f183ad9057a2c4b29f09d2e1
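The commit message mentions a `default_floating_dtype` decorator moving into common_utils. A hedged sketch of what such a decorator could look like, assuming it simply sets the global default around a test and restores it afterwards (the actual PyTorch implementation may differ):

```python
import functools
import torch

def default_floating_dtype(dtype):
    """Run the decorated test with `dtype` as the default floating dtype.

    Illustrative sketch only; not PyTorch's actual implementation.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            saved = torch.get_default_dtype()
            torch.set_default_dtype(dtype)      # apply for the duration of the test
            try:
                return fn(*args, **kwargs)
            finally:
                torch.set_default_dtype(saved)  # always restore global state
        return wrapper
    return decorator

@default_floating_dtype(torch.double)
def makes_double():
    return torch.randn(3).dtype

print(makes_double())  # torch.float64
```

The try/finally restore is the key design point: unlike the old module-import side effect, the setting cannot leak out of one test and silently change the dtype of every test that runs after it.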