
Raise CollectError if pytest.skip() is called during collection #1519

Merged · 2 commits · Jun 25, 2016

Conversation

@omarkohl (Contributor) commented Apr 17, 2016

I'll add the CHANGELOG and documentation once I get some feedback on whether this is correct.

This change effectively disables using pytest.skip at module level (even as a normal function call, not only as a decorator), because that is how I understand issue #607.

@nicoddemus:

@nicoddemus @hpk should we issue a pytest_warning if pytest.skip happens outside of the runtestprotocol?

Sounds good, but as we commented before, the current warnings system could use some improvements.

@nicoddemus:

On second thought I think this should actually fail the collection of that module.

This earlier comment by @hpk42 is also an option:

What about making pytest.skip fail when a function object is passed and point to "pytest.mark.skipif" and the docs?

I tried checking the type of the msg parameter in pytest.skip and it achieves the desired result.
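The type-check idea could look roughly like this. This is a hypothetical sketch of the guard described above, not pytest's actual implementation; the `UsageError` class and the error message are illustrative stand-ins:

```python
class UsageError(Exception):
    """Stand-in for the error pytest would raise (hypothetical)."""


def skip(msg=""):
    # If a function object is passed, the caller almost certainly wrote
    # @pytest.skip instead of @pytest.mark.skip, so fail loudly instead
    # of silently skipping the whole module.
    if callable(msg):
        raise UsageError(
            "pytest.skip() was given a function; "
            "did you mean @pytest.mark.skip or @pytest.mark.skipif?"
        )
    # ...the normal skip logic (raising the Skipped exception) would follow here...
```

The appeal of this variant is that it needs no knowledge of the collection state: the bad argument type alone identifies the misuse.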

Here's a quick checklist that should be present in PRs:

  • Target: for bug or doc fixes, target master; for new features, target features
  • Make sure to include one or more tests for your change
  • Add yourself to AUTHORS
  • Add a new entry to the CHANGELOG (choose any open position to avoid merge conflicts with other PRs)

pytest.skip() must not be used at module level because it can easily be
misunderstood and used as a decorator instead of pytest.mark.skip, causing the
whole module to be skipped instead of just the test being decorated.

This is unexpected for users used to the @unittest.skip decorator and therefore
it is best to bail out with a clean error when it happens.

The pytest equivalent of @unittest.skip is @pytest.mark.skip .

Adapt existing tests that were actually relying on this behaviour and add a
test that explicitly checks that collection fails.

fix #607

@RonnyPfannschmidt (Member)

Good thinking to just catch it; that way around it is far easier to detect, and it requires far less logic, by simply requiring the user to correct their misuse.

I'm a bit surprised by the "unrelated" test removals. As far as I can tell, we can only publish this as a major release, since it breaks external contracts (which is a bit unfortunate, but it seems practical to avoid the work/complexity overhead of strictly retaining backward compatibility in this case).

I'm also surprised that most testing is via junit-xml; I believe/hope this can be done with the normal inline run at least. Could you take a look at how easy/feasible that is?

@omarkohl (Contributor, Author)

@RonnyPfannschmidt I know what you mean by 'unrelated test removals' (even though they are not really unrelated, because the removed/modified tests all rely on pytest.skip at module level).

This is what I previously wrote to @nicoddemus by mail:

There are actually a few tests that rely on the fact that pytest.skip()
will skip the entire module. I'm assuming those tests will need to be
modified. Or is there any legitimate use case for using pytest.skip() at
a module level?

He recommended creating a pull request and discussing it here :-)

I agree that this commit breaks external contracts, because currently it is possible to use pytest.skip() to skip an entire module, and some people might actually use this on purpose. Using @pytest.skip as a decorator is the real problem, I think, because it looks like it only applies to a single test function.

This could be legitimate:

import pytest
pytest.skip("This whole module is currently broken")

def test_one():
    ...

This is misleading:

import pytest

@pytest.skip("Test is broken")
def test_one():
    ...
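For comparison, the intended decorator form uses pytest.mark.skip, which only skips the decorated test rather than the whole module (a minimal example, assuming pytest is installed):

```python
import pytest


# Skips only test_one; other tests in the module still run.
@pytest.mark.skip(reason="Test is broken")
def test_one():
    ...
```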

i'm also surprised that most testing is via junit-xml, i believe/hope this can be done with the normal inline run at least

The main test for my change is test_module_level_skip_error() in test_skipping.py. The other tests I simply adapted because they were failing. The new tests in test_junitxml.py are just copying the existing test_skip_contains_name_reason().

Is test_module_level_skip_error() not good enough? What should I change?

@RonnyPfannschmidt (Member)

Thanks for the clarifications; it makes me wonder if we should test junitxml output in a more localized manner in the future.

With the "unrelated" test I meant the change in resultlog; it's basically a legacy test result output format. With your clarification, that change is fine.

I would appreciate the return of the removed test in runner, as it is very local. Can you take a quick look at what's missing in pytester so that the test can be done with reasonable effort?

Also, well done 👍

@hpk42 and @nicoddemus: I think we need a target branch for this one, as it can't be part of just a feature release since it's a "breaking change". Alternatively, we can bump the next version in the features branch to 3.0?

@RonnyPfannschmidt added the labels "type: enhancement", "topic: reporting", and "type: backward compatibility" on Apr 17, 2016
@RonnyPfannschmidt added this to the 3.0 milestone on Apr 17, 2016
@omarkohl (Contributor, Author)

i would appreciate the return of removed test in runner as it is a very local,

You mean avoid removing test_skip_at_module_scope from test_runner.py? That test verifies that it is possible to skip a complete module with pytest.skip (i.e. assert rep.skipped), but this is no longer the case. Now we can only assert that an error is raised. Isn't that already tested in test_skipping.test_module_level_skip_error?

can you take a quick look at whats missing in pytester so the test can be done with reasonable effort

I don't understand what you mean. Could you give some more details?

@nicoddemus (Member)

Hi @omarkohl, first of all thanks for all the work!

This could be legitimate:

That example could easily be changed to:

import pytest

pytestmark = pytest.mark.skip(reason="This whole module is currently broken")

def test_one():
    ...

So IMHO it would be reasonable to ask users to change this rather unusual use case, in view of the benefit of catching someone misusing @pytest.skip as a decorator.

@RonnyPfannschmidt

i think we need a target branch for this one as it cant be part of just a feature release, since its a "breaking change" - alternatively we can bump the next version in the features branch to 3.0

I'm not entirely sure this qualifies as an API change, it seems to me more that it fixes an API misuse. On the other hand, pytest's own test suite relied on this, so one might argue that it is actually a feature change. 😁

I'm fine with this going to 3.0 only.

@RonnyPfannschmidt (Member)

@nicoddemus I agree that the old behavior was unintended/bad; however, fixing it required a change in how certain exceptions propagate, and thus how py.test interacts with the code under test.

This will turn skips in user projects into collection errors, thus "breaking" their builds on update. Since we know it will break builds for users, if we want to be able to push this into a feature release, it would have to use a different mechanism to warn the user of the misuse.

@omarkohl (Contributor, Author)

@RonnyPfannschmidt @nicoddemus can this be merged (for 3.0)? I can have a look at the conflict in that case...

@nicoddemus (Member)

Yes please... we probably won't break anything serious, but this type of misbehavior should change.

@coveralls commented Jun 22, 2016

Coverage decreased (-0.1%) to 92.354% when pulling b418227 on omarkohl:pytest_skip_decorator into 0f7aeaf on pytest-dev:features.

@omarkohl (Contributor, Author)

Hmm, that is a lot of commits... I just created a merge commit to merge my two commits into 'features'. Two minor merge issues were fixed...

Next time I'll rebase instead.

@nicoddemus (Member)

@omarkohl, could you please rebase this branch now? I think we all can agree that this will go into 3.0. 😁

@omarkohl (Contributor, Author)

@nicoddemus I rebased as we discussed...

@coveralls commented Jun 24, 2016

Coverage decreased (-0.04%) to 92.346% when pulling b3615ac on omarkohl:pytest_skip_decorator into f2bb3df on pytest-dev:features.
