
Conversation

@s0undt3ch
Contributor

Recently I needed to create my own custom MarkEvaluator, which I called FixtureMarkEvaluator.

It looks at fixture results and decides whether to skip a test based on them:

@pytest.mark.skipiffixture("auth_config['auth_config'] == 'internal'",
                           # Only run this against external authers, which are the
                           # only ones that throw unmanaged resource exceptions
                           fixtures=('auth_config',),
                           reason='Test not applicable to the internal auth config')
def test_create_unmanaged_user_removes_cached(self, client, auth_config):
    pass

The alternative would be to skip the test in the test function body, but by then our DB prep routines (which are extensive) would already have been executed, only for the test to be skipped.
This way, we skip before those prep routines are even executed.

I ended up with something like:

import _pytest.skipping


class FixtureMarkEvaluator(_pytest.skipping.MarkEvaluator):
    def _getglobals(self):
        evaluation_globals = super(FixtureMarkEvaluator, self)._getglobals()
        for fixture_name in self.holder.kwargs.get('fixtures', ()):
            # Looking up fixture results adds finalizers which, if the test ends up
            # being skipped by this marker, make pytest fail because of the added
            # finalizer. We have two options: run the finalizers ourselves, or just
            # restore the previously existing finalizers, ignoring whatever those
            # added finalizers would do (for our current needs they can be ignored).

            # Store the finalizers before getting the fixture value and restore them afterwards
            original_finalizers = self.item._request.session._setupstate._finalizers.copy()

            # Inject the fixture value into the globals that are passed on to evaluate the marker
            try:
                evaluation_globals[fixture_name] = self.item._request.getfuncargvalue(fixture_name)
            finally:
                # Restore the previous finalizers state
                self.item._request.session._setupstate._finalizers = original_finalizers
        return evaluation_globals

    def istrue(self):
        result = super(FixtureMarkEvaluator, self).istrue()
        if self.holder:
            try:
                # In our specific use case we're evaluating against a dictionary whose
                # contents change, so remove the evaluated expr from the cache since
                # the cache is not aware of the arguments used to get the result.
                self.item.config._evalcache.pop(self.expr)
            except KeyError:
                pass
        return result
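
For context, the evaluator needs to be invoked from somewhere; here is a minimal sketch of how it could be wired up in conftest.py, modelled on how _pytest.skipping handled the built-in skipif marker at the time (the hook body below is an assumption for illustration, not code from this PR):

import pytest

def pytest_runtest_setup(item):
    # Evaluate the custom skipiffixture marker before any expensive fixture setup runs.
    evaluator = FixtureMarkEvaluator(item, 'skipiffixture')
    if evaluator.istrue():
        pytest.skip(evaluator.getexplanation())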

This would just work; however, again for our specific use case, one of the fixtures on which we use this custom marker depends on an external library, and calling pytest.skip in it would just make the exception get caught by the catch-all exception handler, and we would end up with a not-so-helpful error message:

self = <tests.conftest.FixtureMarkEvaluator object at 0x7fdc62db0240>

    def istrue(self):
>       result = super(FixtureMarkEvaluator, self).istrue()
E       AttributeError: 'FixtureMarkEvaluator' object has no attribute 'expr'

tests/conftest.py:101: AttributeError

~~The approach in this PR is to just catch OutcomeExceptions and raise them, as opposed to just letting them get "caught" by the catch-all exception handler.~~
The approach in this PR is to just catch Skipped and raise it, as opposed to just letting it get "caught" by the catch-all exception handler (we don't catch OutcomeException because, while evaluating _istrue(), a Failed exception can be raised, and catching OutcomeException would change the way Failed is handled).

~~Please consider this PR as is; catching and re-raising the OutcomeExceptions should not trigger any regression, in fact, the skipping code should know how to handle outcome exceptions and not treat them as any other unknown kind of exception.~~
Please consider this PR as is; catching and re-raising Skipped should not trigger any regression, in fact, the skipping code should know how to handle Skipped exceptions and not treat them as any other unknown kind of exception.
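
In code terms, the change amounts to the following shape inside MarkEvaluator.istrue() (a simplified sketch, not the exact diff; the import location of Skipped is the pytest 2.x one, it later moved to _pytest.outcomes):

from _pytest.runner import Skipped  # moved to _pytest.outcomes in later pytest versions

# Sketch: only the new branch is shown; the existing catch-all `except Exception:`
# error reporting stays below it, unchanged.
def istrue(self):
    try:
        return self._istrue()
    except Skipped:
        # Re-raise so that pytest.skip() called while evaluating the marker
        # expression actually skips the test instead of being swallowed by the
        # catch-all handler and reported as an evaluation error.
        raise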

If a test case is mandatory, I'd have to recreate our use case in a single test (possible, but probably very specific and not that helpful as an example). Still, I could try to create such a test to get this code in, since this is breaking our pytest usage experience...

@s0undt3ch
Contributor Author

Hmm, this does break testing/test_skipping.py::TestEvaluator::test_marked_skip_with_not_string ....

@RonnyPfannschmidt
Member

the description doesn't match the patch

there should be documentation that this is a workaround and will eventually be killed and/or a pytest_warning

there should be tests for this change, which does in fact change external behaviour

@s0undt3ch
Contributor Author

  1. Description updated to match the patch (kept the old approach struck through for context)
  2. This is not meant as a workaround, it's meant as a way to allow calling pytest.skip from a custom skipif implementation if need be. How/when/why would it be killed? And, I'm really new to contributing to pytest, so what would a pytest_warning do for this use case?
  3. Still waiting on the AppVeyor results, but only catching and re-raising Skipped does not seem to break existing behavior (judging by the Travis results); however, I'll try to create a minimal test which at least checks proper handling of these raised Skipped exceptions.

@RonnyPfannschmidt
Member

  1. well done

  2. a boolean evaluation and an exception are very different kinds of data flow
    if fixture resolution and mark evaluation get integrated in a more generic way,
    the skip would not happen at evaluation time, but way before instead
    as such i consider the handling of a skip at that particular place a workaround until a more general solution can be formulated

    a nice way to solve it, for example, would be to consider fixtures as parameters to mark evaluation functions, in which case mark evaluation and fixture evaluation would not intersect within the mark evaluation (a hypothetical sketch of that idea follows below)

    the final implementation is left open, since we will likely need the sprint for pytest 3.0 to find a good way

  3. that observation is correct, it's very unlikely to hit actual users; from the technical side we still have to consider it, as this construct adds complexity in various places
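
To illustrate the idea in item 2, a purely hypothetical sketch of what "fixtures as parameters to mark evaluation functions" could look like from the test author's side (skipif_fixture does not exist in pytest; the mark name and signature are assumptions about a possible future API):

import pytest

def uses_internal_auth(auth_config):
    # Would receive the resolved fixture value as a parameter; returning True skips the test.
    return auth_config['auth_config'] == 'internal'

@pytest.mark.skipif_fixture(uses_internal_auth,
                            reason='Test not applicable to the internal auth config')
def test_create_unmanaged_user_removes_cached(client, auth_config):
    pass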

@s0undt3ch
Contributor Author

I added a test case which exemplifies my abnormal use, probably even abuse, of a custom skip marker which injects fixture values into the globals used for the truthiness evaluation...
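
For reference (this is not the actual test added to the PR, just a rough sketch of how the changed behaviour could be exercised with pytest's own testdir fixture, assuming the pytester plugin is enabled): pytest.skip() raised while a skipif expression is being evaluated should now result in a skipped test rather than an evaluation error.

def test_skip_raised_during_skipif_evaluation(testdir):
    testdir.makepyfile("""
        import pytest

        @pytest.mark.skipif("__import__('pytest').skip('skipped from the evaluator')")
        def test_something():
            pass
    """)
    result = testdir.runpytest('-rs')
    # With this change the skip propagates; without it, the marker evaluation
    # error would make the test fail instead.
    result.stdout.fnmatch_lines(['*1 skipped*'])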

s0undt3ch force-pushed the hotfix/raise-skipped branch from b160738 to a58ec32 on February 17, 2016 14:08
@RonnyPfannschmidt
Member

there are some real failures in there, can you have a look?

@s0undt3ch
Contributor Author

Yes, I noticed but still haven't had the time to get them fixed. I'll take care of them as soon as possible.

@RonnyPfannschmidt
Member

thanks for the update :) take your time

@nicoddemus
Member

@s0undt3ch just a gentle ping. 😁

@nicoddemus
Member

@s0undt3ch, we plan a release around 1 week before EuroPython... do you think you could finish this PR by then?

@flub
Member

flub commented Sep 19, 2016

Creating your own mark evaluator is not really a public API. I think an alternative, and slightly better / more officially supported, way of doing this is to use a mark on the test: @pytest.mark.auth_config('internal'). Then you can let the fixture which does not want to run its setup look at this mark and call pytest.skip before it starts doing its setup.
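
One possible reading of that suggestion, as a minimal sketch with illustrative names (the mark name, fixture body, and configuration lookup are assumptions; request.node.get_marker() was the API of that era, later replaced by get_closest_marker()):

import pytest

@pytest.fixture
def auth_config(request):
    current_config = 'internal'  # however the active auth config is determined (assumption)
    marker = request.node.get_marker('auth_config')
    if marker is not None and current_config in marker.args:
        pytest.skip('Test not applicable to the %s auth config' % current_config)
    # The expensive setup only runs when we did not skip above.
    return {'auth_config': current_config}

@pytest.mark.auth_config('internal')
def test_create_unmanaged_user_removes_cached(client, auth_config):
    pass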

@RonnyPfannschmidt
Member

at first glance it looks like this one is not going anywhere at the current point in time,
so i'm closing it for now

@s0undt3ch please open a new one if you find some time to fix the merge conflicts/test failures

@s0undt3ch
Contributor Author

Sorry for being so slow on this one.

Yes, I'll reopen when I have more time to work on this.

s0undt3ch deleted the hotfix/raise-skipped branch on August 28, 2020 13:17
