Add new options to report fixture setup and teardown #1647

Merged · 24 commits · Jun 25, 2016

Conversation

@sallner (Member) commented Jun 22, 2016

This PR implements two CLI flags: --setup-plan, which shows the fixtures that would be executed without actually executing them, and --setup-only, which executes the setup and teardown of all fixtures without executing the test functions themselves.
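
For illustration, a minimal sketch of how the two flags are meant to be used. The module and test names here are hypothetical; the SETUP/TEARDOWN output format follows the lines quoted later in this thread.

    # test_example.py -- hypothetical module to exercise the new flags
    import pytest

    @pytest.fixture
    def resource():
        # everything before the yield is setup, everything after is teardown
        yield 'resource'

    def test_uses_resource(resource):
        assert resource == 'resource'

    $ pytest --setup-plan test_example.py   # only show which fixtures would run
    $ pytest --setup-only test_example.py   # run fixture setup/teardown, skip test bodies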

Here's a quick checklist that should be present in PRs:

  • Target: for bug or doc fixes, target master; for new features, target features
  • Make sure to include one or more tests for your change
  • Add yourself to AUTHORS
  • Add a new entry to the CHANGELOG (choose any open position to avoid merge conflicts with other PRs)

@coveralls commented Jun 22, 2016

Coverage increased (+0.04%) to 92.434% when pulling ecc97aa on sallner:features into 7d60fcc on pytest-dev:features.

3 similar comments

    config = self._fixturemanager.config
    capman = config.pluginmanager.getplugin('capturemanager')
    if capman:
        capman.suspendcapture()
@The-Compiler (Member) commented Jun 22, 2016


An earlier PR by @nicoddemus did the same, but used out, err = capman.suspendcapture() and then printed out/err to stdout/stderr after resuming capture, so as not to lose any captured content. Wouldn't the same apply here as well?


You're right, we didn't realise that .suspendcapture() + .resumecapture() loses the output unless explicitly passed through. Fixed.
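
For reference, a minimal sketch of the resulting pattern, assuming the CaptureManager API of this pytest generation (suspendcapture() returns the captured out/err, resumecapture() re-enables capture); config here is the plugin config object from the snippet under review:

    import sys

    capman = config.pluginmanager.getplugin('capturemanager')
    if capman:
        # suspendcapture() hands back whatever was captured so far
        out, err = capman.suspendcapture()
    # ... write the SETUP/TEARDOWN report to the terminal here ...
    if capman:
        capman.resumecapture()
        # re-emit the captured output so it is not lost
        sys.stdout.write(out)
        sys.stderr.write(err)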

@The-Compiler (Member) commented Jun 22, 2016

I haven't investigated yet, but I get this with --setup-plan for some tests when trying it with qutebrowser's repo:

tests/unit/commands/test_runners.py 
      SETUP    F fail_tests_on_warnings
      TEARDOWN F cmdline_test
      TEARDOWN F fail_tests_on_warningsE

_________________________________________________________________________________ ERROR at setup of TestCommandRunner.test_parse_all[leave-mode] __________________________________________________________________________________

self = <CallInfo when='setup' exception: 'function' object is not subscriptable>, func = <function call_runtest_hook.<locals>.<lambda> at 0x7f6df9cc3950>, when = 'setup'

    def __init__(self, func, when):
        #: context of invocation: one of "setup", "call",
        #: "teardown", "memocollect"
        self.when = when
        self.start = time()
        try:
>           self.result = func()

.tox/py35/lib/python3.5/site-packages/_pytest/runner.py:163: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.tox/py35/lib/python3.5/site-packages/_pytest/runner.py:151: in <lambda>
    return CallInfo(lambda: ihook(item=item, **kwds), when=when)
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:724: in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:338: in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:333: in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:595: in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:253: in _wrapped_call
    return call_outcome.get_result()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:278: in get_result
    raise ex[1].with_traceback(ex[2])
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:264: in __init__
    self.result = func()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:595: in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:253: in _wrapped_call
    return call_outcome.get_result()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:278: in get_result
    raise ex[1].with_traceback(ex[2])
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:264: in __init__
    self.result = func()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:595: in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:253: in _wrapped_call
    return call_outcome.get_result()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:278: in get_result
    raise ex[1].with_traceback(ex[2])
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:264: in __init__
    self.result = func()
.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:596: in execute
    res = hook_impl.function(*args)
.tox/py35/lib/python3.5/site-packages/_pytest/runner.py:100: in pytest_runtest_setup
    item.session._setupstate.prepare(item)
.tox/py35/lib/python3.5/site-packages/_pytest/runner.py:421: in prepare
    col.setup()
.tox/py35/lib/python3.5/site-packages/_pytest/python.py:1799: in setup
    fillfixtures(self)
.tox/py35/lib/python3.5/site-packages/_pytest/python.py:851: in fillfixtures
    request._fillfixtures()
.tox/py35/lib/python3.5/site-packages/_pytest/python.py:1947: in _fillfixtures
    item.funcargs[argname] = self.getfuncargvalue(argname)
.tox/py35/lib/python3.5/site-packages/_pytest/python.py:1990: in getfuncargvalue
    return self._get_active_fixturedef(argname).cached_result[0]
.tox/py35/lib/python3.5/site-packages/_pytest/python.py:2007: in _get_active_fixturedef
    result = self._getfuncargvalue(fixturedef)
.tox/py35/lib/python3.5/site-packages/_pytest/python.py:2072: in _getfuncargvalue
    val = fixturedef.execute(request=subrequest)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <FixtureDef name='cmdline_test' scope='function' baseid='tests' >, request = <SubRequest 'cmdline_test' for <Function 'test_parse_all[leave-mode]'>>

    def execute(self, request):
        # get required arguments and register our own finish()
        # with their finalization
        kwargs = {}
        for argname in self.argnames:
            fixturedef = request._get_active_fixturedef(argname)
            result, arg_cache_key, exc = fixturedef.cached_result
            request._check_scope(argname, request.scope, fixturedef.scope)
            kwargs[argname] = result
            if argname != "request":
                fixturedef.addfinalizer(self.finish)

        my_cache_key = request.param_index
        cached_result = getattr(self, "cached_result", None)
        if cached_result is not None:
            result, cache_key, err = cached_result
            if my_cache_key == cache_key:
                if err is not None:
                    py.builtin._reraise(*err)
                else:
                    return result
            # we have a previous but differently parametrized fixture instance
            # so we need to tear it down before creating a new one
            self.finish()
            assert not hasattr(self, "cached_result")

        fixturefunc = self.func

        if self.unittest:
            if request.instance is not None:
                # bind the unbound method to the TestCase instance
                fixturefunc = self.func.__get__(request.instance)
        else:
            # the fixture function needs to be bound to the actual
            # request.instance so that code working with "self" behaves
            # as expected.
            if request.instance is not None:
                fixturefunc = getimfunc(self.func)
                if fixturefunc != self.func:
                    fixturefunc = fixturefunc.__get__(request.instance)

        try:
            config = request.config
            if config.option.setupplan:
                result = None
            else:
                result = call_fixture_func(fixturefunc, request, kwargs)
            if config.option.setuponly or config.option.setupplan:
                # We want to access the params of ids if they exist also in during
                # the finish() method.
                if hasattr(request, 'param'):
                    if self.ids:
                        ind = self.params.index(request.param)
>                       self.cached_param = self.ids[ind]
E                       TypeError: 'function' object is not subscriptable

.tox/py35/lib/python3.5/site-packages/_pytest/python.py:2541: TypeError

tests/unit/commands/test_runners.py 
      SETUP    F fail_tests_on_warnings
      TEARDOWN F cmdline_test
      TEARDOWN F fail_tests_on_warningsE

@kvas-it (Member) commented Jun 23, 2016

Your qutebrowser test error is caused by ids being a function instead of a list. We missed that case; I'll fix it.
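
A sketch of the kind of fix this implies: ids may be a callable mapping a parameter to its id rather than a list of ids, so the callable case has to be handled before indexing (the actual commit may differ in detail):

    if hasattr(request, 'param'):
        if self.ids:
            if callable(self.ids):
                # ids given as a callable: ask it for this param's id
                self.cached_param = self.ids(request.param)
            else:
                # ids given as a list: index by the param's position
                ind = self.params.index(request.param)
                self.cached_param = self.ids[ind]
        else:
            self.cached_param = request.param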

@coveralls commented Jun 23, 2016

Coverage increased (+0.03%) to 92.427% when pulling 1a5e530 on sallner:features into 7d60fcc on pytest-dev:features.

@kvas-it (Member) commented Jun 23, 2016

@The-Compiler the qutebrowser issue should be fixed by the last commit.

@coveralls commented Jun 23, 2016

Coverage increased (+0.03%) to 92.428% when pulling c6af737 on sallner:features into 7d60fcc on pytest-dev:features.

@The-Compiler (Member) commented Jun 23, 2016

Interestingly enough, this now triggers a segfault in Qt/PyQt 😆 Pretty sure that's not your fault, though.

@kvas-it (Member) commented Jun 24, 2016

The segfault happens in this line:

        tw.write('[{0}]'.format(self.cached_param))

Here self.cached_param is the parameter of the fixture. It's not completely clear from the report which test was being set up, but my guess is test_signal_name, which would mean the parameter is SignalObject().signal1 (a pyqtSignal()). Could it be that it segfaults when format tries to repr it?
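
If that hypothesis is right, a standalone snippet along these lines (hypothetical reproduction, assuming PyQt5 as used by qutebrowser) should crash the same way, since str.format() has to stringify the bound signal:

    from PyQt5.QtCore import QObject, pyqtSignal

    class SignalObject(QObject):
        signal1 = pyqtSignal()

    obj = SignalObject()
    # suspected crash point: formatting forces PyQt to stringify the signal
    print('[{0}]'.format(obj.signal1))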

@The-Compiler (Member):
That seems like a PyQt issue indeed, reported to their mailing list.

@coveralls commented Jun 25, 2016

Coverage increased (+0.06%) to 92.461% when pulling 32ca5cd on sallner:features into 7d60fcc on pytest-dev:features.

@hpk42 merged commit 13a188f into pytest-dev:features on Jun 25, 2016
@hpk42 (Contributor) commented Jun 25, 2016

thanks!
