Replies: 1 comment
-
You might be able to find something useful in the Skipping tests section of the pytest documentation (in case you haven't read that yet). To follow your example, you could mark this test as expected to fail, like this:

```python
import random

import pytest


@pytest.mark.xfail
def test_example():
    assert 0 == random.randrange(1, 9)
```

You also mentioned that a callable would be preferred; in that case you could call pytest.xfail() imperatively from within the test:

```python
import pytest


@pytest.mark.xfail
def test_example():
    try:
        fail = 1 / 0
    except Exception:
        # Imperatively mark this test as xfail from inside the test body.
        pytest.xfail('Expected to fail')
```

Note that all …
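Since the question involves several thousand parametrised cases, individual parameters can also be marked, which gives per-case control without touching the test body. A minimal sketch; the values and the reason string here are made up:

```python
import pytest


@pytest.mark.parametrize(
    "value",
    [
        1,
        2,
        # Hypothetical case known to fail: only its report changes,
        # the other parameters run and report as usual.
        pytest.param(0, marks=pytest.mark.xfail(reason="known-equivalent case")),
    ],
)
def test_example(value):
    assert value > 0
```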
-
Suppose I have a test like this:
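A reconstruction from the reply above, which echoes the same example:

```python
import random


def test_example():
    # randrange(1, 9) returns 1..8, never 0, so this always fails.
    assert 0 == random.randrange(1, 9)
```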
Running this test with

pytest /tmp/test.py

will produce a failure report with a full traceback. On the command line I can add --tb=no to hide the details of the failure. Is it possible to trigger this output modification from within the code? A constant decision to hide would be good; a callable would be perfect.
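A constant traceback style can in fact be selected from code: pytest records the --tb choice on the config's option namespace, so a conftest.py hook can set it. A minimal sketch, assuming the attribute is named tbstyle (the dest of the --tb command-line option):

```python
# conftest.py -- minimal sketch; assumes pytest stores the --tb choice
# under config.option.tbstyle (the dest of the --tb command-line option).
def pytest_configure(config):
    # Equivalent to always passing --tb=no on the command line.
    config.option.tbstyle = "no"
```

The same constant choice also works declaratively via addopts = --tb=no in the [pytest] section of pytest.ini.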
I've tried __tracebackhide__, but it doesn't have the desired effect, even though from the name it sounds like it would do the same as --tb=no.
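For what it's worth, __tracebackhide__ operates per frame rather than per test run: setting it in a helper hides that helper's frame from the traceback, but it doesn't suppress the failure report the way --tb=no does. A minimal sketch (check_value is a hypothetical helper):

```python
def check_value(value):
    # Hide this helper's frame from pytest's traceback; the test itself
    # still fails and is still reported.
    __tracebackhide__ = True
    assert value == 0


def test_example():
    check_value(1)  # the reported traceback points here, not into the helper
```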
Background: I have several thousand test cases through parametrisation. Many of them are equivalent, but it's somewhat hard to weed out the equivalent cases while retaining full coverage.