AttributeError: 'Function' object has no attribute '_skipped_by_mark' #3074

Closed
elliterate opened this issue Jan 3, 2018 · 4 comments
Labels: plugin: logging, type: regression

@elliterate (Contributor)

If test execution aborts before the skipping module's pytest_runtest_setup hook executes (e.g., via pytest.skip() in an earlier setup hook), the skipping module's pytest_runtest_makereport hook fails with:

AttributeError: 'Function' object has no attribute '_skipped_by_mark'

The skipping module's reporting hook assumes that its setup hook executed and thus that it is safe to read the _skipped_by_mark attribute on the test item. If test execution aborts in an earlier setup hook, however, this assumption will be incorrect and its reporting hook will fail.

Note: This error is the same as seen in #2982 and #2974, but, unlike those issues, is not masking a separate, root error: it is the root error.

Example

Setup

$ python --version
Python 3.6.1
$ pip list
attrs (17.4.0)
pip (9.0.1)
pluggy (0.6.0)
py (1.5.2)
pytest (3.3.1)
setuptools (38.2.5)
six (1.11.0)
wheel (0.30.0)

Note: Also fails on pytest 3.3.0. Succeeds on pytest 3.2.5.

Scenario

conftest.py

import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    pytest.skip("manually skipped")

test_skip.py

def test_skipped():
    assert False, "test should be skipped"

Results

$ py.test
======================================================== test session starts ========================================================
platform darwin -- Python 3.6.1, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: $HOME/src/skipping, inifile:
collected 1 item

test_skip.py
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/main.py", line 103, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/main.py", line 141, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 617, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 222, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 216, in <lambda>
INTERNALERROR>     firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 201, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 76, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 180, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/main.py", line 164, in pytest_runtestloop
INTERNALERROR>     item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 617, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 222, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 216, in <lambda>
INTERNALERROR>     firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 201, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 76, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 180, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/runner.py", line 62, in pytest_runtest_protocol
INTERNALERROR>     runtestprotocol(item, nextitem=nextitem)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/runner.py", line 70, in runtestprotocol
INTERNALERROR>     rep = call_and_report(item, "setup", log)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/runner.py", line 157, in call_and_report
INTERNALERROR>     report = hook.pytest_runtest_makereport(item=item, call=call)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 617, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 222, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/__init__.py", line 216, in <lambda>
INTERNALERROR>     firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/pluggy/callers.py", line 196, in _multicall
INTERNALERROR>     gen.send(outcome)
INTERNALERROR>   File "$HOME/.envs/skipping/lib/python3.6/site-packages/_pytest/skipping.py", line 264, in pytest_runtest_makereport
INTERNALERROR>     elif item._skipped_by_mark and rep.skipped and type(rep.longrepr) is tuple:
INTERNALERROR> AttributeError: 'Function' object has no attribute '_skipped_by_mark'

=================================================== no tests ran in 0.01 seconds ====================================================
@pytestbot added the plugin: logging and type: regression labels on Jan 3, 2018

@pytestbot (Contributor)

GitMate.io thinks the contributor most likely able to help you is @nicoddemus.

@elliterate (Contributor, Author)

The fix for this should be as simple as:

diff --git a/_pytest/skipping.py b/_pytest/skipping.py
index a1e5b438..11ddab4d 100644
--- a/_pytest/skipping.py
+++ b/_pytest/skipping.py
@@ -261,7 +261,8 @@ def pytest_runtest_makereport(item, call):
             else:
                 rep.outcome = "passed"
                 rep.wasxfail = explanation
-    elif item._skipped_by_mark and rep.skipped and type(rep.longrepr) is tuple:
+    elif hasattr(item, '_skipped_by_mark') and item._skipped_by_mark and rep.skipped and \
+            type(rep.longrepr) is tuple:
         # skipped by mark.skipif; change the location of the failure
         # to point to the item definition, otherwise it will display
         # the location of where the skip exception was raised within pytest

I'd be happy to submit a pull request, but I was unsure what test, if any, would be appropriate. This seems like a mistake that any hook could make at any time, and thus isn't really specific to the skipping plugin.
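
For reference, a rough sketch of what such a regression test could look like, using pytest's testdir fixture. The test name, the pytest_plugins = "pytester" line, and the expected outcomes are my own assumptions, not something agreed on in this thread.

# Rough sketch only: assumes it runs where the ``testdir`` fixture is available
# (pytest's own test suite, or any suite enabling the pytester plugin).
pytest_plugins = "pytester"


def test_skip_from_tryfirst_setup_hook(testdir):
    # Reproduce the scenario above: a tryfirst conftest hook skips before
    # _pytest.skipping's own pytest_runtest_setup can set _skipped_by_mark.
    testdir.makeconftest(
        """
        import pytest

        @pytest.hookimpl(tryfirst=True)
        def pytest_runtest_setup(item):
            pytest.skip("manually skipped")
        """
    )
    testdir.makepyfile(
        """
        def test_skipped():
            assert False, "test should be skipped"
        """
    )
    result = testdir.runpytest()
    # With the fix applied this should be a clean skip, not an INTERNALERROR.
    assert "INTERNALERROR" not in result.stdout.str()
    result.assert_outcomes(skipped=1)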

@RonnyPfannschmidt (Member)

For better consistency I propose using getattr(item, '_skipped_by_mark', False).
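
As a quick illustration of that suggestion (a standalone sketch using a hypothetical stand-in object, not the real pytest item):

class FakeItem:
    """Hypothetical stand-in for the pytest Function item; the real item only
    has _skipped_by_mark if _pytest.skipping's setup hook has run."""


item = FakeItem()

# Guard from the diff above: two lookups.
print(hasattr(item, '_skipped_by_mark') and item._skipped_by_mark)  # False

# Proposed guard: one call, same behaviour, no AttributeError.
print(getattr(item, '_skipped_by_mark', False))                     # False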

@RonnyPfannschmidt (Member)

For documentation purposes: https://github.com/pytest-dev/pytest/pull/3075/files#diff-9696819f160fbd3aa8ce9363c38990e9L172 is the line that triggers the issue. The tryfirst hook that skips simply runs before the other call; that detail was my oversight back when switching the variable names.

We might want to evaluate the codebase for more such pitfalls in the future.
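
One rough way to look for similar pitfalls could be a quick scan like the sketch below (a hypothetical helper, not part of pytest, meant to be run from a pytest checkout; the heuristic is crude and will report false positives):

import re
from pathlib import Path

# Hypothetical helper: flag direct ``item._xxx`` attribute reads in the
# built-in plugins that are not obviously guarded or assigned on the same line.
attr_read = re.compile(r"item\._\w+")

for path in Path("_pytest").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if not attr_read.search(line):
            continue
        # Skip assignments and lines that already use hasattr()/getattr().
        if " = " in line or "hasattr(" in line or "getattr(" in line:
            continue
        print("%s:%d: %s" % (path, lineno, line.strip()))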
