Improve xfails by allowing specifying expected errors | test(torchlib) #799
Conversation
Thanks @justinchuby, this is great! Can we guard on the error message too? The exception type feels a bit broad; for example, I may want to flag a change in behavior when ORT starts to complain about a mismatching type instead of the type not being implemented.
The error message is harder because there isn't a pytest decorator for it. I could implement one, but that will take longer. Do you have any suggestions? Thanks!
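For context: pytest.mark.xfail filters only on exception type via raises= and has no built-in message check, so a message guard would need a small custom decorator. A minimal sketch, assuming a hypothetical xfail_on_error_message helper that is not part of this PR:

import functools
import re

import pytest


def xfail_on_error_message(raises, pattern, reason):
    """Hypothetical decorator: xfail only when the raised error's message matches `pattern`."""

    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            try:
                test_func(*args, **kwargs)
            except raises as e:
                if re.search(pattern, str(e)):
                    pytest.xfail(reason=reason)  # expected failure with the expected message
                raise  # same exception type but a different message: report as a real failure

        return wrapper

    return decorator

If the test passes, it simply passes (non-strict behavior); only a matching error type and message is converted into an xfail.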
return DecorateMeta(
    op_name=op_name,
    variant_name=variant_name,
    decorator=unittest.expectedFailure,
For my own understanding, why do we seem to decorate xfail in two places? One here, and the other down in

def normal_xfail_skip_test_behaviors(
    ...
    if test_behavior == "xfail":
        pytest.xfail(reason=reason)
There is the decorator that decorates the test method, and there are context managers that handle the subtests. I haven't found a good way to unify the two. It may be possible for us to create our own decorators, but I am not sure whether pytest supports that.
It's not supported..
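For readers of this thread, the two places correspond to two different pytest mechanisms. A simplified illustration with made-up names and reasons, not this repo's code:

import pytest


# 1. Decorator form: applied to the whole test method ahead of time.
@pytest.mark.xfail(reason="ORT does not implement this op", raises=NotImplementedError)
def test_whole_method():
    ...


def known_ort_limitation() -> bool:
    # Stub standing in for whatever run-time check decides the expected failure.
    return True


# 2. Imperative form: called at run time, e.g. from a context manager that
#    wraps subtest execution, once the expected-failure condition is known.
def test_decided_at_runtime():
    if known_ort_limitation():
        pytest.xfail(reason="known ORT limitation for this input")
    ...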
if raises is None:
    decorator = pytest.mark.xfail(reason=reason)
else:
    decorator = pytest.mark.xfail(reason=reason, raises=raises)
Need to set strict=True?
https://docs.pytest.org/en/7.3.x/reference/reference.html#pytest-mark-xfail-ref
When strict=True, any passing subtests will fail the whole test method. We have cases where some subtests fail and we may still want to run the test. Any suggestions?
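For context, this is what the strict flag changes (a simplified example, not this repo's code):

import pytest


# Non-strict (the default): an unexpected pass is reported as XPASS and the
# run stays green, which matters when only some subtests inside the method
# are expected to fail.
@pytest.mark.xfail(reason="some subtests hit an ORT limitation", strict=False)
def test_non_strict():
    ...


# Strict: an unexpected pass fails the whole test method, so a single
# passing subtest would turn the method red.
@pytest.mark.xfail(reason="some subtests hit an ORT limitation", strict=True)
def test_strict():
    ...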
if test_behavior == "xfail":
    pytest.xfail(reason=reason)
if raises is not None:
    with pytest.raises(raises):
If we pass in the unittest instance, could we do:

if test_behavior == "xfail":
    if raises is not None:
        with test_case.assertRaisesRegex(raises, expected_error_message_regex):
            raise
    pytest.xfail(reason=reason)
Never mind, I think we need a context manager similar to this, but wrapped around the subtest call.
Edit: wait, this function is already that context manager, so it should work?
It should work for the subtests, but I am not sure how to support the test methods.
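A rough sketch of the kind of context manager being discussed for subtests, with illustrative names; the actual signature in this PR may differ:

import contextlib
import re

import pytest


@contextlib.contextmanager
def handle_expected_subtest_error(raises, pattern, reason):
    """Turn an expected error raised inside a subtest into an xfail.

    `raises` is the expected exception type, `pattern` an optional regex the
    error message must match, and `reason` the text reported for the xfail.
    """
    try:
        yield
    except raises as e:
        if pattern is not None and not re.search(pattern, str(e)):
            raise  # unexpected message: surface as a real failure
        pytest.xfail(reason=reason)


# Usage inside a test body (illustrative; run_subtest and sample are assumed):
# with handle_expected_subtest_error(RuntimeError, r"not implemented", "op unsupported in ORT"):
#     run_subtest(sample)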
A naive way would be to do a string match on the reason and the exception message. I guess it doesn't have to be strict; just point out where the error occurred and its type.
That's for the subtests, I think? I am wondering if there's a way to use the decorators.
Do we have more cases that need xfail on the whole test? The other dtypes?
Yes. Sometimes ORT fails for the op, so we xfail the whole test.
Expect RuntimeError instead of AssertionError when ORT raises an error in a test, for more accurate xfails.

Fixes #794
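In practice, an xfail scoped to both a reason and an expected error type looks like this (values are illustrative):

import pytest


# With `raises` set, the xfail only covers a RuntimeError coming from ORT;
# any other exception (e.g. an AssertionError from a numeric mismatch) still
# fails the test, which is what makes the xfails more accurate.
@pytest.mark.xfail(
    reason="ORT raises RuntimeError for this op",
    raises=RuntimeError,
)
def test_op_against_ort():
    ...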