Adopt PyTorch's test util to torchscript test #640
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master     #640   +/-   ##
=======================================
  Coverage   88.82%   88.82%
=======================================
  Files          21       21
  Lines        2220     2220
=======================================
  Hits         1972     1972
  Misses        248      248
=======================================

Continue to review full report at Codecov.
@@ -209,7 +197,7 @@ def func(tensor):

     def test_lfilter(self):
         if self.dtype == torch.float64:
-            pytest.xfail("This test is known to fail for float64")
+            raise unittest.SkipTest("This test is known to fail for float64")
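The change above replaces a pytest-specific mechanism with unittest's built-in skip. A minimal, dependency-free sketch of how `raise unittest.SkipTest` behaves (a string stands in for `torch.float64` so the example runs without torch):

```python
import unittest

class LfilterTest(unittest.TestCase):
    # Stand-in for torch.float64; a string keeps this sketch dependency-free.
    dtype = "float64"

    def test_lfilter(self):
        if self.dtype == "float64":
            # unittest-native skip: the runner reports the test as skipped,
            # and no pytest installation is required (unlike pytest.xfail).
            raise unittest.SkipTest("This test is known to fail for float64")
        self.fail("not reached when dtype is float64")
```

Note that `pytest.xfail` marks the test as an expected failure, while `unittest.SkipTest` marks it as skipped; the two are not strictly equivalent, but the latter works under a plain unittest runner.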
I'd make this a decorator then and have separate tests for each dtype.
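A sketch of what the suggested decorator could look like (`skipIfDtype` is a hypothetical name, and strings stand in for torch dtypes to keep the example self-contained):

```python
import functools
import unittest

def skipIfDtype(dtype, reason):
    # Hypothetical decorator: skips the test when the test case's dtype matches.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            if self.dtype == dtype:
                raise unittest.SkipTest(reason)
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator

class Float64Tests(unittest.TestCase):
    dtype = "float64"  # stand-in for torch.float64

    @skipIfDtype("float64", "This test is known to fail for float64")
    def test_lfilter(self):
        self.assertTrue(True)  # real assertions would go here

class Float32Tests(Float64Tests):
    dtype = "float32"  # the same test runs normally for float32
```

With this layout, each dtype gets its own test case class, and the skip condition lives in one reusable decorator instead of being repeated inside test bodies.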
I think that's outside of the scope of this PR.
Following this PR, with pytorch 1.5.0 I get the following.
I'm assuming this is only compatible with nightlies?
offline follow-up with @mthrok -- yup.
scheduler.step() per epoch not per batch
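The comment above refers to where the learning-rate scheduler is stepped in a training loop. A minimal sketch of the intended structure, assuming a standard PyTorch setup (the model, optimizer, and batches are dummies for illustration):

```python
import torch
from torch.optim.lr_scheduler import StepLR

# Dummy model and data, just to illustrate the loop structure.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = StepLR(optimizer, step_size=1, gamma=0.5)

batches = [torch.randn(8, 4) for _ in range(3)]

for epoch in range(2):
    for batch in batches:
        optimizer.zero_grad()
        loss = model(batch).sum()
        loss.backward()
        optimizer.step()   # parameter update: once per batch
    scheduler.step()       # LR decay: once per epoch, not per batch
```

Calling `scheduler.step()` inside the batch loop would decay the learning rate far faster than intended.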
@cpuhrsch
Part 1 of the PyTorch test framework adaptation. Applied to torchscript tests and some functional tests.
If this looks good, I will expand the adaptation to other tests.