Can't mark.xfail a callable test param #552

Closed
pytestbot opened this issue Jul 31, 2014 · 11 comments
Labels: type: bug (problem that needs to be addressed)

Comments

Originally reported by: Tom V (BitBucket: tomviner, GitHub: tomviner)


Within a pytest.mark.parametrize decorator, marking xfail on a (non-lambda) callable like pytest.mark.xfail(func) doesn't work.

These first three examples mark the second param as xfail, but each still shows up as a failed test when run:

# xfail applied, but error still shown

import pytest

def func1():
    return 1

def func2():
    return 2

@pytest.mark.parametrize('func_param',
    [func1, pytest.mark.xfail(func2)])
def test_xfail_param_func(func_param):
    assert func_param() == 1


class cls1:
    val = 1

class cls2:
    val = 2

@pytest.mark.parametrize('cls_param',
    [cls1, pytest.mark.xfail(cls2)])
def test_xfail_param_cls(cls_param):
    assert cls_param.val == 1


type_cls1 = type('cls1', (), {'val': 1})
type_cls2 = type('cls2', (), {'val': 2})
@pytest.mark.parametrize('type_cls_param',
    [type_cls1, pytest.mark.xfail(type_cls2)])
def test_xfail_param_type(type_cls_param):
    assert type_cls_param.val == 1

Output:

$ py.test test_xfail.py
===================================================================== test session starts =====================================================================
platform linux2 -- Python 2.7.6 -- py-1.4.22 -- pytest-2.6.0
collected 6 items 

test_xfail.py .F.F.F

========================================================================== FAILURES ===========================================================================
_____________________________________________________________ test_xfail_param_func[func_param1] ______________________________________________________________

func_param = <function func2 at 0x7ff6772ae320>

    @pytest.mark.parametrize('func_param',
        [func1, pytest.mark.xfail(func2)])
    def test_xfail_param_func(func_param):
>       assert func_param() == 1
E       assert 2 == 1
E        +  where 2 = <function func2 at 0x7ff6772ae320>()

test_xfail.py:13: AssertionError
______________________________________________________________ test_xfail_param_cls[cls_param1] _______________________________________________________________

cls_param = <class test_xfail.cls2 at 0x7ff6772917a0>

    @pytest.mark.parametrize('cls_param',
        [cls1, pytest.mark.xfail(cls2)])
    def test_xfail_param_cls(cls_param):
>       assert cls_param.val == 1
E       assert 2 == 1
E        +  where 2 = <class test_xfail.cls2 at 0x7ff6772917a0>.val

test_xfail.py:24: AssertionError
___________________________________________________________ test_xfail_param_type[type_cls_param1] ____________________________________________________________

type_cls_param = <class 'test_xfail.cls2'>

    @pytest.mark.parametrize('type_cls_param',
        [type_cls1, pytest.mark.xfail(type_cls2)])
    def test_xfail_param_type(type_cls_param):
>       assert type_cls_param.val == 1
E       assert 2 == 1
E        +  where 2 = <class 'test_xfail.cls2'>.val

test_xfail.py:32: AssertionError
============================================================= 3 failed, 3 passed in 0.03 seconds ==============================================================

And some examples where it behaves as expected, with non-callables (and lambdas):

# xfail applied, as expected, no error shown

lambda_func1 = lambda: 1
lambda_func2 = lambda: 2
@pytest.mark.parametrize('lambda_param',
    [lambda_func1, pytest.mark.xfail(lambda_func2)])
def test_xfail_param_lambda(lambda_param):
    assert lambda_param() == 1


func_dict = {
    'func1': func1,
    'func2': func2,
}
@pytest.mark.parametrize('str_param',
    ['func1', pytest.mark.xfail('func2')])
def test_xfail_param_str(str_param):
    func = func_dict[str_param]
    assert func() == 1


@pytest.mark.parametrize('int_param',
    [1, pytest.mark.xfail(2)])
def test_xfail_param_int(int_param):
    assert int_param == 1


dict1 = {'key': 1}
dict2 = {'key': 2}
@pytest.mark.parametrize('dict_param',
    [dict1, pytest.mark.xfail(dict2)])
def test_xfail_param_dict(dict_param):
    assert dict_param['key'] == 1

And the output, which is what I expect regardless of the type of the params:

$ py.test test_xfail.py
===================================================================== test session starts =====================================================================
platform linux2 -- Python 2.7.6 -- py-1.4.22 -- pytest-2.6.0
collected 6 items 

test_xfail.py .x.x.x

============================================================= 3 passed, 3 xfailed in 0.04 seconds =============================================================

Original comment by Tom V (BitBucket: tomviner, GitHub: tomviner):


I can see what's causing this now, and I have a workaround.

MarkDecorator acts differently when passed a single callable argument. So the solution is to make sure you pass a second argument to mark.xfail, and only one other argument is valid: reason.

To see what's going on here, these two pytest.mark.parametrize expressions differ only in the reason argument:

def f1():
    return 1

def f2():
    return 2

>>> pytest.mark.parametrize(
...     'f', [f1, pytest.mark.xfail(f2)])
<MarkDecorator 'parametrize' {'args': ('f', [<function f1 at 0x7f9e7b47f0c8>, <function f2 at 0x7f9e79791050>]), 'kwargs': {}}>

>>> pytest.mark.parametrize(
...     'f', [f1, pytest.mark.xfail(f2, reason='returns 2')])
<MarkDecorator 'parametrize' {'args': ('f', [<function f1 at 0x7f9e7b47f0c8>, <MarkDecorator 'xfail' {'args': (<function f2 at 0x7f9e79791050>,), 'kwargs': {'reason': 'returns 2'}}>]), 'kwargs': {}}>

Note how the xfail mark has vanished entirely in the first case: func2 appears bare in the params list, because the mark was applied to it as a decorator instead of wrapping it.

Applying this workaround to the original example now works as expected, although arriving at it clearly wasn't straightforward:

@pytest.mark.parametrize('func_param',
    [func1, pytest.mark.xfail(func2, reason='returns 2')])
def test_xfail_param_func(func_param):
    assert func_param() == 1

And this works for all the param types that produced the unexpected failing xfails above.
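For instance, the same fix applied to the class-based example from earlier (the reason string here is illustrative):

@pytest.mark.parametrize('cls_param',
    [cls1, pytest.mark.xfail(cls2, reason='val is 2')])
def test_xfail_param_cls(cls_param):
    assert cls_param.val == 1

With the keyword argument present, MarkDecorator no longer sees a lone callable, so the mark wraps the param instead of decorating it.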

Not sure what action this issue requires now. Clearer docs at a minimum, but cleverer code to detect this scenario would be even better.

Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


I guess passing in "reason" is the cleanest solution for now. You could probably also do pytest.mark.xfail()(func2), although that's not very obvious either. I think this needs to be documented; I am open to a PR on the relevant page.

We could maybe generate a warning based on detecting whether the function name of a marker's first arg lacks a "test" prefix. Or even assume it's meant as an argument rather than as directly decorating things.
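A rough sketch of that proposed check (hypothetical helper name, not pytest code) could look like this:

import inspect
import warnings

def warn_if_probably_not_a_test(mark_name, obj):
    # Hypothetical helper sketching the proposal above, not pytest code:
    # warn when a mark directly decorates a callable whose name doesn't
    # look like a test, since the user probably meant to mark a param.
    name = getattr(obj, '__name__', '')
    if inspect.isfunction(obj) and not name.startswith('test'):
        warnings.warn(
            "mark.{0} applied directly to {1!r}, which does not look "
            "like a test function; did you mean to mark a parametrize "
            "value instead?".format(mark_name, name))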

Original comment by Tom V (BitBucket: tomviner, GitHub: tomviner):


Re trying pytest.mark.xfail()(func2): adding extra () calls doesn't change anything:

>>> pytest.mark.xfail()(func2)
<function __main__.func2>

# we've accidentally applied the xfail to a function param as if it was a decorated test function
>>> vars(func2)
{'xfail': <MarkInfo 'xfail' args=() kwargs={}>}

# zero, one, or more empty calls make no difference
>>> pytest.mark.xfail(func2) == pytest.mark.xfail()(func2) == pytest.mark.xfail()()(func2)
True
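For context, this behaviour comes down to MarkDecorator's call logic, which is roughly the following (a simplified sketch, not the actual pytest source; the attached-mark representation is a stand-in):

class MarkDecorator:
    def __init__(self, name, args=(), kwargs=None):
        self.name = name
        self.args = args
        self.kwargs = kwargs or {}

    def __call__(self, *args, **kwargs):
        # Simplified: a single callable positional argument with no
        # keyword arguments is treated as direct decoration, so the
        # mark is attached to the object itself...
        if len(args) == 1 and not kwargs and callable(args[0]):
            func = args[0]
            setattr(func, self.name, (self.args, self.kwargs))
            return func
        # ...anything else just accumulates args/kwargs as mark
        # parameters and returns a new decorator.
        return MarkDecorator(self.name, self.args + args,
                             dict(self.kwargs, **kwargs))

An empty call returns another argument-less MarkDecorator, so the next call with func2 still hits the single-callable branch, which is why any number of ()s makes no difference. (The real code also special-cases classes and appears to exclude lambdas by their __name__, which is why the lambda examples above behaved as expected.)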

I tried your suggestion of detecting whether the function name of a marker's first arg lacks a "test" prefix: implementation and failing tests, because lots of functions in the test suite are mark-decorated without having test-prefixed names. Those tests use short one-letter function names, though; in real life, would the functions in these failing tests have to start with test (or at least match the python_functions config setting)? Can we rely on that?

I would say nope: I've found code in the wild that mark-decorates non-tests.

But maybe a warning, as you mentioned, would be helpful? That would require there being no use case for decorating a non-test. But isn't the way pytest collects tests highly configurable, so that ultimately there's no way to know whether something is a test function/method/class or not?

So, being conservative, we're back to a documentation note. It's an unfortunate ambiguity caused by Python's decorator flexibility. If you're in agreement, Holger, I'll write a docs patch.

Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


I agree that at this point adding a note in the docs is the best way forward. Could you maybe do a little PR?

holger

Original comment by Tom V (BitBucket: tomviner, GitHub: tomviner):


Will do


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Merged in tomviner/pytest (pull request #222)

fix issue552: Add a note to the docs about marking single callables


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


fix issue552: note about marking single callables



Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Thanks Tom for the PR, I think we can close this issue now unless I am missing something.


Original comment by Tom V (BitBucket: tomviner, GitHub: tomviner):


Yup, there's no code fix that can be done, so all good from my end, Holger.
