
Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) #27444


Conversation

mruberry (Collaborator) commented on Oct 6, 2019

This PR stops common_utils.py from setting the default tensor type when it's imported. See issue #27355. This behavior is a frequent source of confusion for test writers.

Many tests relied on this setting (whether they knew it or not), and this PR also updates the test suite to pass without common_utils.py setting the default tensor type. Some larger test files, however, now set the default floating dtype themselves. These test files are:

  • test_autograd.py
  • test_distributions.py
  • test_jit.py
  • test_nn.py

This is still a significant improvement over the status quo. First, these files set the default floating dtype much more explicitly than importing it from common_utils did. Second, the rest of the test suite no longer sets it globally. Third, this PR is a springboard for updating those tests, too. In particular, as tests are made generic they can be moved away from relying on this global setting, as sketched below.
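To make the "generic" direction concrete, such a test receives its dtype and device explicitly instead of leaning on the global default. The class and helper names below are invented for illustration; this is a minimal hypothetical sketch, not code from this PR:

```python
import unittest

import torch


class TestAddGeneric(unittest.TestCase):
    def _test_add(self, dtype, device):
        # Inputs carry an explicit dtype/device instead of relying on
        # torch.get_default_dtype().
        a = torch.ones(4, dtype=dtype, device=device)
        b = torch.ones(4, dtype=dtype, device=device)
        self.assertEqual((a + b).sum().item(), 8.0)

    def test_add_float_cpu(self):
        self._test_add(torch.float, "cpu")

    def test_add_double_cpu(self):
        self._test_add(torch.double, "cpu")


if __name__ == "__main__":
    unittest.main()
```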

Notable technical changes in this PR are:

  • Significant updates to test_torch.py to make it pass without setting the default floating dtype globally.
  • The @default_floating_dtype decorator is now defined in common_utils; a couple of versions of this decorator were previously defined in individual test files (a sketch follows this list).
  • test_torch-specific parts of common_utils were refactored into test_torch.
  • Tensor creation methods in common_utils were updated to accept an optional dtype and device.
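Below is a rough sketch of what such utilities might look like; the make_tensor helper is hypothetical, written only to illustrate the optional dtype and device pattern, and is not code from this PR:

```python
import functools

import torch


def default_floating_dtype(dtype):
    """Run the decorated test with `dtype` as the default floating
    dtype, restoring the previous default afterwards."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            prev = torch.get_default_dtype()
            torch.set_default_dtype(dtype)
            try:
                return fn(*args, **kwargs)
            finally:
                # Restore the previous default even if the test fails.
                torch.set_default_dtype(prev)
        return wrapper
    return decorator


# Hypothetical creation helper taking an optional dtype and device,
# falling back to the current defaults when they are not given.
def make_tensor(*shape, dtype=None, device=None):
    dtype = torch.get_default_dtype() if dtype is None else dtype
    return torch.randn(shape, dtype=dtype, device=device)
```

Applied as @default_floating_dtype(torch.double), this runs a single test in double without mutating the default for the rest of the suite.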

@pytorchbot added labels on Oct 6, 2019: oncall: jit, module: autograd, module: cuda, oncall: distributed, module: distributions, module: nn, module: operators, module: optimizer, module: sparse, module: tests, module: dataloader, module: cpp-extensions
@mruberry changed the title from "[WIP - Testing] Removes common_utils.py set default tensor type" to "Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor)" on Oct 7, 2019
mruberry (Collaborator, Author) commented on Oct 7, 2019

The ASAN build failures are unrelated but mandate a retest. I will wait until review in case there are changes (which would then retrigger the CI).

ezyang (Contributor) commented on Oct 7, 2019

How are you "testing" this change? :)

Review thread on the following lines:

```python
return u.matmul(s.expand(batch_dims + (matrix_size, matrix_size)).matmul(v.transpose(-2, -1)))


def lu_solve_test_helper(self, A_dims, b_dims, cast, pivot):
```
Contributor:

Heh, dead code, nice!

mruberry (Collaborator, Author):

This code was inlined into test_torch, its only caller.


Review thread on the following lines:

```python
# TODO: remove this global setting
# Autograd tests use double as the default dtype
torch.set_default_dtype(torch.double)
```
Contributor:

So, I recall that you noticed that the CUDA autograd tests were done with float. Is that still the case after this change?

mruberry (Collaborator, Author):

I must have been mistaken. Previously the equivalent setting came from importing common_utils, and it applies to CUDA tensors as well. There is no functional change here.
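For context, torch.set_default_dtype governs tensor factory functions regardless of the target device, so CUDA tensors pick up the default as well; a quick check:

```python
import torch

torch.set_default_dtype(torch.double)

print(torch.empty(3).dtype)  # torch.float64
if torch.cuda.is_available():
    # The global default applies to CUDA tensor creation too.
    print(torch.empty(3, device="cuda").dtype)  # torch.float64
```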

ezyang (Contributor) approved these changes:

In the interest of avoiding merge conflicts I'm approving this after only a cursory review. I am still curious about how the changes are being "validated", though.

mruberry (Collaborator, Author) commented on Oct 7, 2019

"Validation" is an interesting question.

We have changed the way tests work in a majority of the Python test suite to use float instead of double for floating point tensors.

Tests that can pass with this change are left with it. This is a change to our testing surface (we are testing double tensors much less). Tests that require the default floating dtype to be double maintain the previous behavior. I was conservative in keeping tests in double to avoid this PR being too large, hence the four test files that declare the default dtype to be double immediately.

Aside from the test suite ensuring this change does not break our CI, I also plan to do a follow-up review on test coverage (which will include dtype coverage). Another issue with this plan is that third-parties may have relied (knowingly or otherwise) on common_utils setting the default tensor dtype. However I think it would be productive to break their test suites and make them explicitly declare the behavior if they like, or start ensuring their float behavior works as expected.

mruberry (Collaborator, Author) commented on Oct 7, 2019

@pytorchbot rebase this please.

mruberry (Collaborator, Author) commented on Oct 7, 2019

The rebase is to validate that the ASAN builds pass the test CI. They failed to build in the previous CI run.

facebook-github-bot (Contributor):

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@mruberry deleted the default_test_tensor branch on October 8, 2019 at 16:57
facebook-github-bot (Contributor):

@mruberry merged this pull request in 7f183a9.

thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request on Feb 4, 2020:

Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) (pytorch#27444)
Pull Request resolved: pytorch#27444

Differential Revision: D17795235

Pulled By: mruberry

fbshipit-source-id: 7f77271c0c836e69f183ad9057a2c4b29f09d2e1