Test fail (instead of skip) on empty parameter set #2527

Closed
robert-cody opened this Issue Jun 24, 2017 · 18 comments


robert-cody commented Jun 24, 2017

I need tests that receive an empty parameter set to fail instead of being skipped.
My test looks like this:

import pytest

def parameters_provider():
    # HTTP request to some external provider service.
    return service_request().result or []

@pytest.mark.parametrize("parameter", parameters_provider())
def test_something(parameter):
    assert parameter is not None

So if the provider fails to return results, the test gets an empty parameter set and pytest skips it. But for me this is actually an error: no tests were run and nothing was actually checked, so the test must fail.

After some investigation I've found a pretty nasty way to achieve my goal, in conftest.py:

def pytest_runtest_teardown(item, nextitem):
    skip = item.get_marker("skip")
    if skip:
        # Default to "" so startswith() doesn't blow up when no reason is set.
        reason = skip.kwargs.get("reason", "")
        if reason.startswith("got empty parameter set"):
            raise ValueError(reason)

I don't like it because it depends on the reason text, which can change in any future pytest release. Is there a better way to do it?

RonnyPfannschmidt commented Jun 24, 2017

There isn't a better way yet

nicoddemus commented Jun 24, 2017

Erroring out by default could be problematic as well, I suppose.

robert-cody commented Jun 24, 2017

I see this code in the pytest sources:

if not argvalues:
    argvalues = [(NOTSET,) * len(argnames)]
    # we passed a empty list to parameterize, skip that test
    #
    fs, lineno = getfslineno(self.function)
    newmark = pytest.mark.skip(
        reason="got empty parameter set %r, function %s at %s:%d" % (
            argnames, self.function.__name__, fs, lineno))
    newkeywords = [{newmark.markname: newmark}]

Here all parameters are set to the NOTSET sentinel value, so maybe there is some way to check whether all parameters are NOTSET and then raise? I didn't find a clean way to get these argvalues from the item passed into pytest_runtest_teardown.
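The closest I can get is a sketch that pokes at those internals through the item's callspec, which is exactly the kind of fragility I'd like to avoid (untested; NOTSET and the callspec layout are private pytest details and may change between releases):

# conftest.py -- untested sketch relying on private pytest internals.
import pytest
from _pytest.python import NOTSET  # private sentinel, an assumption about internals

def pytest_runtest_teardown(item, nextitem):
    # Only parametrized items carry a callspec.
    callspec = getattr(item, "callspec", None)
    if callspec is None:
        return
    # Direct parametrize values land in funcargs, indirect ones in params.
    values = list(callspec.funcargs.values()) + list(callspec.params.values())
    if values and all(v is NOTSET for v in values):
        pytest.fail("test received an empty parameter set")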

robert-cody commented Jun 24, 2017

Or maybe there is something else I can return from the provider (instead of an empty list) so that the test fails, but collection succeeds and all other tests still run?

RonnyPfannschmidt commented Jun 24, 2017

Return a senseless parameter set and put a mark onto it that you can detect.
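E.g. (rough sketch, untested; the empty_provider mark name is arbitrary and service_request is the provider call from your example):

import pytest

def parameters_provider():
    # Fall back to a single dummy entry that carries a detectable mark.
    values = service_request().result
    return values or [pytest.param(None, marks=pytest.mark.empty_provider)]

# conftest.py
def pytest_runtest_setup(item):
    if item.get_marker("empty_provider"):
        pytest.fail("parameter provider returned no values")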

robert-cody commented Jun 24, 2017

@RonnyPfannschmidt thanks for the suggestion; I implemented it this way:

import pytest

# Shared sentinel, importable from both the test module and conftest.py.
PARAMETER_NOT_SET = object()

def parameters_provider():
    # HTTP request to some external provider service.
    return service_request().result or [PARAMETER_NOT_SET]

@pytest.mark.parametrize("parameter", parameters_provider())
def test_something(parameter):
    assert parameter is not None

# In conftest.py:
def pytest_pyfunc_call(pyfuncitem):
    arg_values = pyfuncitem.funcargs.values()
    if arg_values and all(av is PARAMETER_NOT_SET for av in arg_values):
        pytest.fail("Test function got empty parameter set!")

nicoddemus commented Jun 25, 2017

Should we close this then?

robert-cody commented Jun 25, 2017

If you don't mind, I'd prefer to make it optional: fail on empty set, or skip. I'll make a pull request for this a bit later if you think it can be useful.

nicoddemus commented Sep 28, 2017

Thanks for the offer to open a PR. My personal opinion is that it should fail by default; adding another option will only complicate things. @RonnyPfannschmidt, what's your take on this?

RonnyPfannschmidt commented Sep 28, 2017

I'd prefer to avoid complication in that code.

robert-cody commented Sep 29, 2017

I also agree that it should fail by default, but for some reason it was implemented to skip such tests; that's why I proposed making it an option. If you think that nobody needs skip and it won't break compatibility, let it be so. Personally, I think there may be many cases where users don't even know that some of their tests never get executed, and currently that's a problem.

RonnyPfannschmidt commented Sep 29, 2017

I implemented the skip because people demonstrated valid use cases for empty parameter sets a while back.
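E.g. (illustrative, not from that discussion): parametrizing over optional backends discovered at runtime, where an empty list is legitimate on machines that have none installed:

import pytest

# discover_backends is hypothetical; on some machines it returns [].
@pytest.mark.parametrize("backend", discover_backends())
def test_backend_roundtrip(backend):
    assert backend.roundtrip(b"data") == b"data"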

robert-cody commented Sep 29, 2017

@RonnyPfannschmidt so your choice is an option for skip/fail, or only fail?
Usually it's better when a test is marked for skip explicitly; it helps to avoid false positives.

RonnyPfannschmidt commented Sep 29, 2017

Would xfail without run fit your use case?

If an option is introduced, it should enable skip (the default), xfail, fail, and ignore, optionally via a marker to choose per test.
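Purely illustrative, nothing is implemented yet; the option could live in the ini file, roughly:

# pytest.ini -- hypothetical option and value names
[pytest]
empty_parameter_set_mark = xfail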

robert-cody commented Sep 29, 2017

In my case xfail won't be enough, because it still counts as success, while empty data from the data provider is always an error and must be reported.

You mean skip as the default for backwards compatibility only? That doesn't look reasonable and isn't really intuitive. For comparison: if you specify -k 'unexistent_test', nothing is executed and the exit code is != 0, meaning error, which is correct; success in this case would be confusing.

RonnyPfannschmidt commented Sep 29, 2017

The default is for backward compat

nicoddemus commented Sep 29, 2017

I don't know; empty parameter sets seem like such a rare occurrence to me that it should fail by default, as it is most likely an error.

I would propose changing the default to "error", especially since users who intend for it to skip can do this:

import pytest

@pytest.mark.skipif(not get_parameter_values(), reason='empty parameters')
@pytest.mark.parametrize('a', get_parameter_values())
def test_foo(a):
    pass

Then at least whoever wants to keep the existing "skip" behavior can have it.

@RonnyPfannschmidt can you please dig up that discussion about the points in favor of "skip"? It would help this discussion for sure.

RonnyPfannschmidt commented Sep 30, 2017

RonnyPfannschmidt added a commit to RonnyPfannschmidt/pytest that referenced this issue Dec 17, 2017

