Foreach Test Refactor: Pointwise, Min/Max-imum #61327
Conversation
- rewrite pointwise unit tests using the `ops` decorator
- rewrite minimum & maximum unit tests using the `ops` decorator
- enable the minimum/maximum fast path for BFloat16
- remove the `_test_data` method
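For context, this is roughly what the `@ops`-based pattern looks like — a minimal sketch assuming the standard OpInfo/device-type test machinery; the database name `foreach_pointwise_op_db` comes from this thread, and the real signatures in `test/test_foreach.py` may differ:

```python
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, ops)
from torch.testing._internal.common_methods_invocations import (
    foreach_pointwise_op_db)


class TestForeach(TestCase):
    # @ops instantiates one test per (device, dtype, op) combination
    # listed in the database, replacing hand-written per-dtype loops.
    @ops(foreach_pointwise_op_db)
    def test_pointwise(self, device, dtype, op):
        for sample in op.sample_inputs(device, dtype):
            actual = op.get_op()(sample.input, *sample.args)
            # Every foreach op returns a list of result tensors.
            self.assertIsInstance(actual, (list, tuple))


instantiate_device_type_tests(TestForeach, globals())

if __name__ == "__main__":
    run_tests()
```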
💊 CI failures summary and remediations

As of commit 48b2cd4 (more details on the Dr. CI page and at hud.pytorch.org/pr/61327):

🕵️ 4 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:
test/test_foreach.py (Outdated)
```python
    opinfo.sample_inputs(device, dtype, N, noncontiguous=not is_fastpath),
    opinfo.sample_inputs(device, dtype, N, noncontiguous=not is_fastpath),
]
self._pointwise_test(dtype, op, ref, inputs, is_fastpath, is_inplace=False, values=None)
```
are there any tests where `values` is not None?
Oh, I completely forgot about that. I'm adding tests with `values`.
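For reference, exercising a non-None `values` argument looks roughly like this — a sketch assuming the `_foreach_addcmul` per-tensor scalar-list overload; the actual test plumbs `values` through `_pointwise_test`:

```python
import torch

# Illustrative inputs only.
tensors1 = [torch.ones(3) for _ in range(4)]
tensors2 = [torch.full((3,), 2.0) for _ in range(4)]
tensors3 = [torch.full((3,), 3.0) for _ in range(4)]

# values passed as a per-tensor list instead of the default scalar 1.
out = torch._foreach_addcmul(tensors1, tensors2, tensors3, [0.5] * 4)
print(out[0])  # tensor([4., 4., 4.]) since 1 + 0.5 * (2 * 3) = 4
```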
test/test_foreach.py (Outdated)
```python
def test_minmax_inf_nan(self, device, dtype, op):
    inputs = (
        [
            torch.tensor([inf], device=device, dtype=dtype),
```
can you please use `float('inf')` etc. here, and remove the `torch._six` imports? Also, how is this test different from the previous one?
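The suggested change, sketched out (the before is shown as comments; that `torch._six` exposed `inf`/`nan` aliases is the assumption here):

```python
import torch

# Before (private helper module the review asks to drop):
#   from torch._six import inf, nan
# After: plain Python literals work everywhere.
inf, nan = float('inf'), float('nan')
t = torch.tensor([inf, nan])  # tensor([inf, nan])
```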
> can you please use `float('inf')` etc. here, and remove the `torch._six` imports?
Does merging `test_minmax_inf_nan` and `test_minmax_float_inf_nan` sound good to you?
> Also, how is this test different from the previous one?
The only difference is the use of the `@ops` decorator.
I must be missing something, both tests have `@ops(foreach_minmax_op_db)`, but nm, combining these 2 tests is good.
test/test_foreach.py (Outdated)
```python
try:
    actual = foreach_op(tensors1, tensors2, tensors3)
except RuntimeError as e:
    with self.assertRaisesRegex(type(e), re.escape(str(e))):
```
when do errors happen here? It seems like inputs corresponding to the same op are always on the same device, e.g. the below translates into `[native_op(*_cuda_tensors), native_op(*_cpu_tensors)]`, which should work. Also, no harm in writing it like this, it's a bit cleaner than a zip of previously zipped tensors.
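A sketch of the single-zip form being suggested (tensor names are illustrative; the real inputs come from `opinfo.sample_inputs`):

```python
import torch

tensors1 = [torch.rand(3) for _ in range(2)]
tensors2 = [torch.rand(3) for _ in range(2)]
tensors3 = [torch.rand(3) for _ in range(2)]

# One zip over the parallel lists: each reference-op call only ever sees
# tensors from a single device, so no cross-device RuntimeError is expected.
expected = [torch.addcmul(a, b, c)
            for a, b, c in zip(tensors1, tensors2, tensors3)]
```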
> when do errors happen here?
CPU kernels don't support BFloat16 and Half, but `@ops(foreach_pointwise_op_db)` creates test cases for these two dtypes. What do you think about using the `@dtypes` decorator and limiting the dtypes to, e.g., `all_fp_dtypes`?
> It seems like inputs corresponding to the same op are always on the same device, ...
That sounds way better, thank you.
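For reference, the `@dtypes` alternative floated above would look roughly like this (a sketch; `all_fp_dtypes` from the thread is spelled out as explicit dtypes here, and the test body is elided):

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, dtypes)


class TestForeachPointwise(TestCase):
    # Only instantiate dtypes the CPU kernels support, rather than every
    # dtype in the op database; float16/bfloat16 are left out on purpose.
    @dtypes(torch.float32, torch.float64)
    def test_pointwise_cpu(self, device, dtype):
        ...


instantiate_device_type_tests(TestForeachPointwise, globals())

if __name__ == "__main__":
    run_tests()
```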
Yeah, if the purpose of this test is to test behavior on different devices, it's better to do it for a single dtype (e.g. float). But you probably want to actually test that inputs on different devices throw an error, and right now you are not doing that.
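A minimal sketch of asserting the cross-device failure directly, per the comment above (this assumes foreach ops reject tensor lists that span devices, raising a RuntimeError):

```python
import unittest
import torch


class TestForeachMixedDevices(unittest.TestCase):
    @unittest.skipUnless(torch.cuda.is_available(), "requires CUDA")
    def test_mixed_device_inputs_raise(self):
        cpu_tensors = [torch.rand(3)]
        cuda_tensors = [torch.rand(3, device="cuda")]
        # Mixing devices in one call should raise, and the test should
        # assert that rather than merely re-raising whatever happens.
        with self.assertRaises(RuntimeError):
            torch._foreach_addcmul(cpu_tensors, cuda_tensors, cuda_tensors)
```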
Nice work, @crcrpar!
```diff
     complex(1.0 - random.random(), 1.0 - random.random()),
 )
-for N, scalar in itertools.product(N_values, scalars):
+for N, scalar in itertools.product(N_values, Scalars):
```
nice refactor
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
See also: `ops` decorator #58833
cc: @ptrblck @ngimel