Introduce common utility for defining test matrix for device/dtype #616
Conversation
Force-pushed from aa39d2f to d7256e6 (Compare)
Codecov Report
@@            Coverage Diff            @@
##           master     #616    +/-   ##
=======================================
  Coverage   88.99%   89.00%
=======================================
  Files          21       21
  Lines        2254     2255       +1
=======================================
+ Hits         2006     2007       +1
  Misses        248      248
Continue to review full report at Codecov.
LGTM
if __name__ == '__main__':
    unittest.main()
    common_utils.define_test_suites(globals(), [Functional, Transforms])
Alright, looks like PyTorch is passing globals().
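For context, a helper along these lines can generate one test-suite class per (device, dtype) combination and inject it into the caller's scope via globals(). This is a hypothetical sketch, not the actual torchaudio common_utils implementation; the names, signature, and defaults are assumptions.

```python
import itertools
import unittest


def define_test_suites(scope, test_classes, devices=('cpu',), dtypes=('float32', 'float64')):
    """Hypothetical sketch: for each test class, generate one
    unittest.TestCase subclass per (device, dtype) combination and
    inject it into the given scope (typically the caller's globals()),
    so unittest discovery picks the generated suites up."""
    for cls in test_classes:
        for device, dtype in itertools.product(devices, dtypes):
            name = f'{cls.__name__}_{device}_{dtype}'
            # Each generated class fixes `device` and `dtype` as class attributes,
            # which the shared test methods read via self.device / self.dtype.
            scope[name] = type(name, (cls, unittest.TestCase),
                               {'device': device, 'dtype': dtype})
```

Passing globals() from the test module is what makes the generated classes visible to the test runner, which only discovers top-level TestCase subclasses.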
@@ -33,25 +30,12 @@ def test_clamp(self):
        b_coeffs = torch.tensor([1, 0], dtype=self.dtype, device=self.device)
        a_coeffs = torch.tensor([1, -0.95], dtype=self.dtype, device=self.device)
        output_signal = F.lfilter(input_signal, a_coeffs, b_coeffs, clamp=True)
-       self.assertTrue(output_signal.max() <= 1)
+       assert output_signal.max() <= 1
nit: changing self.assertTrue to assert? Oh, that's because the class isn't inheriting from unittest. I see.
if __name__ == '__main__':
    unittest.main()
Removing unittest looks good to me.
I'm really not a fan of not inheriting from PyTorch's unittest class. Sure, we can argue that some designs, such as overwriting self.assertEqual, aren't perfect, but the entire system relies on it, and we should comply with whatever the core library defines as "equal". Otherwise we stand to fail to test for new developments such as type upcasting or new Tensor properties.
Offline discussion: I will look into this.
I agree. I suggest we revert the PR and redo each item separately. @mthrok -- can you elaborate on the action items you have discussed?
Reverting this would add more work for the same outcome, so I would not.
To work on #613, I added a helper function that simplifies the logic of generating test suites for different devices, and extends it to dtypes.
As a result, the torchscript consistency test is performed on the (cpu, cuda) x (float32, float64) matrix (float64 is newly added).
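The expanded matrix can be enumerated directly; this small sketch just makes the four combinations explicit (the device and dtype lists come from the PR description above).

```python
import itertools

devices = ['cpu', 'cuda']
dtypes = ['float32', 'float64']  # float64 is newly added by this PR

# Every torchscript consistency test now runs once per combination:
matrix = list(itertools.product(devices, dtypes))
# -> [('cpu', 'float32'), ('cpu', 'float64'), ('cuda', 'float32'), ('cuda', 'float64')]
```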
Resample is not compatible with float64; this is fixed in this PR.
lfilter does not yield consistent results between Python and torchscript only when float64 is used. I looked at it but do not have an immediate fix, so I marked these tests as expected to fail.
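Marking a known-bad case as "expected to fail" can be done with unittest.expectedFailure, as sketched below. This is a hedged illustration of the mechanism only; the actual torchaudio test names and comparison logic differ, and the stand-in values here are fabricated for the sketch.

```python
import unittest


class LfilterConsistency(unittest.TestCase):
    @unittest.expectedFailure
    def test_lfilter_float64_torchscript(self):
        # Placeholder for the real check: Python and torchscript outputs
        # of lfilter diverge for float64, so until the root cause is
        # fixed the test is decorated as an expected failure.
        python_out, script_out = 1.0, 1.0 + 1e-3  # stand-in values
        self.assertAlmostEqual(python_out, script_out, places=6)
```

With the decorator in place, the suite still passes overall while recording the divergence, so the regression stays visible without blocking CI.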