
Conversation

@mruberry (Collaborator) commented Aug 22, 2020

This PR adds a new test suite, test_ops.py, designed for generic tests across all operators with OpInfos. It currently has two kinds of tests:

  • it validates that the OpInfo has the correct supported dtypes by verifying that unsupported dtypes throw an error and supported dtypes do not
  • it runs grad and gradgrad checks on each op and its variants (method and inplace) that has an OpInfo

This is a significant expansion and simplification of the current autogenerated autograd tests, which spend considerable time processing their inputs. As an alternative, this PR extends OpInfos with "SampleInputs" that are much easier to use. These sample inputs are analogous to the existing tuples in method_tests().

Future PRs will extend OpInfo-based testing to other uses of method_tests(), like test_jit.py, to ensure that new operator tests can be implemented entirely using an OpInfo.
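
To make the pattern concrete, here is a minimal sketch of how an OpInfo with sample inputs can drive a generic gradcheck-style test. The OpInfo and SampleInput names come from the PR, but the constructors, fields, and the sin entry shown here are illustrative assumptions, not the PR's verbatim API:

```python
import torch

class SampleInput:
    """A single sample input for an operator: a tensor plus any extra args."""
    def __init__(self, input, args=(), kwargs=None):
        self.input = input
        self.args = args
        self.kwargs = kwargs or {}

class OpInfo:
    """Operator metadata: name, callable, supported dtypes, sample inputs."""
    def __init__(self, name, op, dtypes, sample_inputs_func):
        self.name = name
        self.op = op
        self.dtypes = dtypes
        self.sample_inputs_func = sample_inputs_func

def sample_inputs_sin(device, dtype, requires_grad):
    t = torch.randn(3, 3, device=device, dtype=dtype,
                    requires_grad=requires_grad)
    return [SampleInput(t)]

op_db = [OpInfo('sin', torch.sin, {torch.float32, torch.float64},
                sample_inputs_sin)]

# A generic test then iterates over op_db instead of hand-written cases;
# gradcheck needs double precision for acceptable numerical tolerance.
for opinfo in op_db:
    for sample in opinfo.sample_inputs_func('cpu', torch.float64, True):
        assert torch.autograd.gradcheck(opinfo.op,
                                        (sample.input,) + sample.args)
```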

@mruberry mruberry changed the title Adds opinfo-based autograd tests and (un)supported dtype tests [WIP] Adds opinfo-based autograd tests and (un)supported dtype tests Aug 22, 2020
dr-ci bot commented Aug 22, 2020

💊 CI failures summary and remediations

As of commit 932c71b (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 2/2 non-CircleCI failure(s)

ci.pytorch.org: 2 failed



low, high, requires_grad: bool = False) -> torch.Tensor:
"""Returns a tensor of the specified size on the given device and dtype.
The tensor's values are between -9 and 9, inclusive, unless low (high)
is not None, in which case the values are between max(-9, low) and
min(9, high).
Collaborator:

Could you explain why you chose to use the max instead of just using the provided low value?

Collaborator Author (@mruberry):

Low could be -inf, and we only want to generate values in a finite range, so we need some stopping point. If we only represented finite domains then it would make sense to actually use low; however, that may run into other issues, as some of our tests will fail when float values become too large due to cross-platform discrepancies.
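
In other words, the helper clamps any requested range to a finite default. A minimal sketch of that logic for floating-point dtypes (the quoted diff truncates the helper's name, so make_tensor and its full signature are assumptions here):

```python
import torch

def make_tensor(size, device, dtype, low=None, high=None,
                requires_grad=False):
    # Clamp the range to [-9, 9] so values stay finite and small even
    # when low/high are None or infinite (e.g. low=-inf for log's domain).
    low = -9 if low is None else max(-9, low)
    high = 9 if high is None else min(9, high)
    t = torch.empty(size, device=device, dtype=dtype).uniform_(low, high)
    t.requires_grad_(requires_grad)
    return t
```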

test/test_ops.py Outdated
output.backward(t)
inplace_output.backward(t)

self.assertEqual(sample.input.grad, inplace_input.grad)
Collaborator Author (@mruberry):

Move to gradcheck and gradgradcheck (as appropriate).

Collaborator Author (@mruberry):

The test for same dtypes has been moved to gradcheck.
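
For reference, the relocated checks amount to running torch.autograd.gradcheck and gradgradcheck over each sample input; a minimal sketch (torch.sin and the input shape are illustrative):

```python
import torch
from torch.autograd import gradcheck, gradgradcheck

# gradcheck compares analytical gradients against finite differences,
# so double-precision inputs are required for acceptable tolerance.
x = torch.randn(3, 3, dtype=torch.double, requires_grad=True)

assert gradcheck(torch.sin, (x,))      # first-order gradients
assert gradgradcheck(torch.sin, (x,))  # second-order gradients
```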

@mruberry mruberry changed the title [WIP] Adds opinfo-based autograd tests and (un)supported dtype tests Adds opinfo-based autograd tests and (un)supported dtype tests Aug 28, 2020
@mruberry mruberry requested a review from apaszke as a code owner August 28, 2020 08:17
@mruberry mruberry requested review from albanD and removed request for apaszke August 28, 2020 08:22
codecov bot commented Aug 30, 2020

Codecov Report

Merging #43451 into master will increase coverage by 0.03%.
The diff coverage is 93.15%.


@@            Coverage Diff             @@
##           master   #43451      +/-   ##
==========================================
+ Coverage   69.31%   69.34%   +0.03%     
==========================================
  Files         378      378              
  Lines       46745    46801      +56     
==========================================
+ Hits        32403    32456      +53     
- Misses      14342    14345       +3     
Impacted Files Coverage Δ
torch/autograd/gradcheck.py 87.20% <71.42%> (-0.56%) ⬇️
torch/testing/_internal/common_device_type.py 84.57% <83.33%> (-0.15%) ⬇️
...ch/testing/_internal/common_methods_invocations.py 91.70% <96.15%> (+0.58%) ⬆️
torch/testing/_internal/common_utils.py 77.23% <97.05%> (+0.61%) ⬆️
torch/testing/_internal/expecttest.py 78.57% <0.00%> (+1.02%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@albanD (Collaborator) left a comment:

Thanks for the updates. LGTM.

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@mruberry merged this pull request in 665feda.

@t-vi (Collaborator) commented Sep 4, 2020

https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-linux-bionic-rocm3.7-py3.6-test2/721/console

This breaks because it calls self.skipTest with an unsupported signature (it only takes one string argument).

@t-vi (Collaborator) commented Sep 4, 2020

I'll send a fix.

@mruberry (Collaborator, Author) commented Sep 4, 2020

> https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-linux-bionic-rocm3.7-py3.6-test2/721/console
>
> This breaks because it calls self.skipTest with an unsupported signature (it only takes one string argument).

Yes, you're correct. It's meant to be a single string. Looks like that code path just didn't get tested.
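
For context, unittest's TestCase.skipTest accepts a single reason string, so passing a second argument fails before the skip is recorded. A minimal illustration (the test class and skip message are hypothetical):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_op(self):
        # Correct: skipTest takes exactly one reason string.
        self.skipTest("dtype torch.bfloat16 not supported on this device")

        # Broken (what the ROCm CI hit): a second positional argument
        # raises TypeError because skipTest only accepts a reason string.
        # self.skipTest("unsupported dtype", "torch.bfloat16")

if __name__ == "__main__":
    unittest.main()
```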

facebook-github-bot pushed a commit that referenced this pull request Sep 8, 2020
Summary:
Fixes a broken skipTest from #43451, e.g. in the ROCm CI.

Pull Request resolved: #44181

Reviewed By: ngimel

Differential Revision: D23568608

Pulled By: malfet

fbshipit-source-id: 557048bd5f0086ffac38d1c48255badb63869899
 nondet_tol: float = 0.0,
-check_undefined_grad: bool = True
+check_undefined_grad: bool = True,
+check_grad_dtypes: bool = False
Collaborator:

@mruberry this new arg was not documented here. Is that on purpose or just an oversight?

Collaborator Author (@mruberry):

I don't recall offhand what we decided; this is probably an oversight. We may have thought to not document it at the time so we'd have more time to review the UX for this function later, though.

Collaborator:

Ok. Do you think it is stable enough now that we should document it? Or is it still fairly internal?

Collaborator Author (@mruberry):

Your call. I'm not planning any more changes to this function.
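
For reference, check_grad_dtypes is the new torch.autograd.gradcheck keyword argument shown in the quoted diff; a minimal usage sketch (torch.sin and the input are illustrative):

```python
import torch
from torch.autograd import gradcheck

x = torch.randn(3, dtype=torch.double, requires_grad=True)

# check_grad_dtypes=True additionally verifies that each computed
# gradient has the dtype gradcheck expects for its input.
assert gradcheck(torch.sin, (x,), check_grad_dtypes=True)
```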

# Build a complex tensor whose real and imaginary parts are each
# uniform in [-span/2, span/2).
float_dtype = torch.float if dtype is torch.cfloat else torch.double
real = torch.rand(size, device=device, dtype=float_dtype) * span - (span / 2)
imag = torch.rand(size, device=device, dtype=float_dtype) * span - (span / 2)
c = torch.complex(real, imag)
Contributor (@anjali411):

I think the above code can be rewritten as c = torch.rand(size, device=device, dtype=dtype) * span - span/2 * (1+1j)

Collaborator Author (@mruberry):

PRs welcome ;)

Welcome (kinda?) back, @anjali411!
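
The suggested one-liner works because torch.rand with a complex dtype samples the real and imaginary parts independently from [0, 1), so scaling by span and shifting by span/2 * (1+1j) lands both parts in the same interval as the original two-tensor version. A quick sanity check (size, device, and span are illustrative):

```python
import torch

size, device, span = (4, 4), 'cpu', 6.0
dtype = torch.cfloat

# Original two-tensor construction.
float_dtype = torch.float if dtype is torch.cfloat else torch.double
real = torch.rand(size, device=device, dtype=float_dtype) * span - (span / 2)
imag = torch.rand(size, device=device, dtype=float_dtype) * span - (span / 2)
c1 = torch.complex(real, imag)

# Suggested one-liner: complex rand draws real/imag parts in [0, 1).
c2 = torch.rand(size, device=device, dtype=dtype) * span - span / 2 * (1 + 1j)

# Both lie in the same square region of the complex plane.
for c in (c1, c2):
    assert c.real.abs().max() <= span / 2 and c.imag.abs().max() <= span / 2
```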

@facebook-github-bot facebook-github-bot deleted the op_info_grad_check branch January 27, 2021 18:27