Merge pull request #2892 from nicoddemus/merge-master-into-features
Merge upstream/master into features
RonnyPfannschmidt committed Nov 4, 2017
2 parents d7e8eee + f3a119c commit b18a9de
Showing 19 changed files with 215 additions and 59 deletions.
1 change: 1 addition & 0 deletions AUTHORS
@@ -47,6 +47,7 @@ Dave Hunt
 David Díaz-Barquero
 David Mohr
 David Vierra
+Daw-Ran Liou
 Denis Kirisov
 Diego Russo
 Dmitry Dygalo
16 changes: 11 additions & 5 deletions _pytest/assertion/rewrite.py
@@ -591,23 +591,26 @@ def run(self, mod):
         # docstrings and __future__ imports.
         aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"),
                    ast.alias("_pytest.assertion.rewrite", "@pytest_ar")]
-        expect_docstring = True
+        doc = getattr(mod, "docstring", None)
+        expect_docstring = doc is None
+        if doc is not None and self.is_rewrite_disabled(doc):
+            return
         pos = 0
-        lineno = 0
+        lineno = 1
         for item in mod.body:
             if (expect_docstring and isinstance(item, ast.Expr) and
                     isinstance(item.value, ast.Str)):
                 doc = item.value.s
-                if "PYTEST_DONT_REWRITE" in doc:
-                    # The module has disabled assertion rewriting.
+                if self.is_rewrite_disabled(doc):
                     return
-                lineno += len(doc) - 1
                 expect_docstring = False
             elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or
                   item.module != "__future__"):
                 lineno = item.lineno
                 break
             pos += 1
+        else:
+            lineno = item.lineno
         imports = [ast.Import([alias], lineno=lineno, col_offset=0)
                    for alias in aliases]
         mod.body[pos:pos] = imports
@@ -633,6 +636,9 @@ def run(self, mod):
                       not isinstance(field, ast.expr)):
                     nodes.append(field)

+    def is_rewrite_disabled(self, docstring):
+        return "PYTEST_DONT_REWRITE" in docstring
+
     def variable(self):
         """Get a new variable."""
         # Use a character invalid in python identifiers to avoid clashing.
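The refactor above centralizes the opt-out check in the new `is_rewrite_disabled` helper: a module's docstring disables assertion rewriting by containing a magic marker. A minimal standalone sketch of that logic (the function below mirrors the helper for illustration; it is not the pytest code itself):

```python
import ast

def is_rewrite_disabled(docstring):
    # Mirrors the helper added in the diff: a module opts out of
    # assertion rewriting by mentioning this marker in its docstring.
    return "PYTEST_DONT_REWRITE" in docstring

source = '"""PYTEST_DONT_REWRITE: leave this module alone."""\nx = 1\n'
mod = ast.parse(source)
doc = ast.get_docstring(mod)  # extracts the module docstring, if any

print(is_rewrite_disabled(doc))                    # True -> rewriting skipped
print(is_rewrite_disabled("a normal docstring"))   # False
```

Fetching the docstring once up front (via the module's docstring attribute in the new code, `ast.get_docstring` here) is what lets the rewriter bail out before walking the module body at all.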
2 changes: 1 addition & 1 deletion _pytest/doctest.py
@@ -127,7 +127,7 @@ def repr_failure(self, excinfo):
             lines = ["%03d %s" % (i + test.lineno + 1, x)
                      for (i, x) in enumerate(lines)]
             # trim docstring error lines to 10
-            lines = lines[example.lineno - 9:example.lineno + 1]
+            lines = lines[max(example.lineno - 9, 0):example.lineno + 1]
         else:
             lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
             indent = '>>>'
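The `max(..., 0)` guard matters because a negative slice start is interpreted relative to the end of the list, which silently produced an empty context window for failures early in long docstrings (the bug tracked by changelog 2882 below). A quick illustration, with illustrative variable names:

```python
# Twelve source lines from a docstring; the failing example is at line 3.
lines = ["line %02d" % i for i in range(12)]
lineno = 3

# Old behaviour: 3 - 9 == -6, so the slice start resolves to index
# 12 - 6 == 6, which is past the stop index 4 -> empty list, no context.
without_guard = lines[lineno - 9:lineno + 1]

# Fixed behaviour: clamp the start to 0 and show up to 10 lines.
with_guard = lines[max(lineno - 9, 0):lineno + 1]

print(without_guard)  # []
print(with_guard)     # ['line 00', 'line 01', 'line 02', 'line 03']
```

Note the guard only changes behaviour when `lineno < 9`; deeper in the docstring both expressions produce the same ten-line window.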
5 changes: 3 additions & 2 deletions _pytest/mark.py
@@ -293,8 +293,9 @@ def _check(self, name):
             pass
         self._markers = l = set()
         for line in self._config.getini("markers"):
-            beginning = line.split(":", 1)
-            x = beginning[0].split("(", 1)[0]
+            marker, _ = line.split(":", 1)
+            marker = marker.rstrip()
+            x = marker.split("(", 1)[0]
             l.add(x)
         if name not in self._markers:
             raise AttributeError("%r not a registered marker" % (name,))
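The fix strips trailing whitespace from the marker name before any argument list is dropped, so an INI entry written as `slow : marks tests as slow` now registers as `slow` rather than `slow ` (changelog 2856 below). A standalone sketch of the parsing, using a hypothetical helper name rather than pytest's own API:

```python
def marker_name(line):
    # Split "name(args) : description" once at the first colon,
    # drop trailing whitespace, then drop any "(...)" argument list.
    marker, _ = line.split(":", 1)
    marker = marker.rstrip()
    return marker.split("(", 1)[0]

print(marker_name("slow: marks tests as slow"))     # 'slow'
print(marker_name("slow : marks tests as slow"))    # 'slow' (was 'slow ')
print(marker_name("serial(scope) : run serially"))  # 'serial'
```

One behavioural detail of the new unpacking: `marker, _ = line.split(":", 1)` raises `ValueError` for a line with no colon, whereas the pre-fix `line.split(":", 1)[0]` tolerated that case.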
1 change: 1 addition & 0 deletions changelog/1505.doc
@@ -0,0 +1 @@
Introduce a dedicated section about conftest.py.
1 change: 1 addition & 0 deletions changelog/2658.doc
@@ -0,0 +1 @@
Append example for pytest.param in the example/parametrize document.
1 change: 1 addition & 0 deletions changelog/2856.bugfix
@@ -0,0 +1 @@
Strip whitespace from marker names when reading them from INI config.
1 change: 1 addition & 0 deletions changelog/2882.bugfix
@@ -0,0 +1 @@
Show full context of doctest source in the pytest output, if the lineno of failed example in the docstring is < 9.
4 changes: 2 additions & 2 deletions doc/en/assert.rst
@@ -209,8 +209,8 @@ the ``pytest_assertrepr_compare`` hook.
 .. autofunction:: _pytest.hookspec.pytest_assertrepr_compare
    :noindex:

-As an example consider adding the following hook in a conftest.py which
-provides an alternative explanation for ``Foo`` objects::
+As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
+file which provides an alternative explanation for ``Foo`` objects::

    # content of conftest.py
    from test_foocompare import Foo
50 changes: 50 additions & 0 deletions doc/en/example/parametrize.rst
@@ -485,4 +485,54 @@ of our ``test_func1`` was skipped. A few notes:
 values as well.


+Set marks or test IDs for individual parametrized tests
+-------------------------------------------------------
+
+Use ``pytest.param`` to apply marks or set test IDs for individual parametrized tests.
+For example::
+
+    # content of test_pytest_param_example.py
+    import pytest
+
+    @pytest.mark.parametrize('test_input,expected', [
+        ('3+5', 8),
+        pytest.param('1+7', 8,
+                     marks=pytest.mark.basic),
+        pytest.param('2+4', 6,
+                     marks=pytest.mark.basic,
+                     id='basic_2+4'),
+        pytest.param('6*9', 42,
+                     marks=[pytest.mark.basic, pytest.mark.xfail],
+                     id='basic_6*9'),
+    ])
+    def test_eval(test_input, expected):
+        assert eval(test_input) == expected
+
+In this example, we have 4 parametrized tests. Except for the first test,
+we mark the remaining three parametrized tests with the custom marker ``basic``,
+and for the fourth test we also use the built-in mark ``xfail`` to indicate this
+test is expected to fail. For explicitness, we set test ids for some tests.
+
+Then run ``pytest`` with verbose mode and with only the ``basic`` marker::
+
+    pytest -v -m basic
+    ============================================ test session starts =============================================
+    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
+    rootdir: $REGENDOC_TMPDIR, inifile:
+    collected 4 items
+
+    test_pytest_param_example.py::test_eval[1+7-8] PASSED
+    test_pytest_param_example.py::test_eval[basic_2+4] PASSED
+    test_pytest_param_example.py::test_eval[basic_6*9] xfail
+    ========================================== short test summary info ===========================================
+    XFAIL test_pytest_param_example.py::test_eval[basic_6*9]
+
+    ============================================= 1 tests deselected =============================================
+
+As a result:
+
+- Four tests were collected.
+- One test was deselected because it doesn't have the ``basic`` mark.
+- Three tests with the ``basic`` mark were selected.
+- The test ``test_eval[1+7-8]`` passed, but the name is autogenerated and confusing.
+- The test ``test_eval[basic_2+4]`` passed.
+- The test ``test_eval[basic_6*9]`` was expected to fail and did fail.
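The xfail on the fourth parameter set in the documentation example above is deliberate: ``'6*9'`` evaluates to 54, not the expected 42, so the assertion fails exactly as the mark predicts. Checked standalone:

```python
test_input, expected = "6*9", 42
result = eval(test_input)  # evaluate the arithmetic expression string

print(result)              # 54
print(result == expected)  # False -- which is why the case carries xfail
```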
30 changes: 16 additions & 14 deletions doc/en/example/pythoncollection.rst
@@ -175,21 +175,23 @@ You can always peek at the collection tree without running tests like this::
 ======= no tests ran in 0.12 seconds ========

-customizing test collection to find all .py files
----------------------------------------------------------
+.. _customizing-test-collection:

-.. regendoc:wipe
+Customizing test collection
+---------------------------

-You can easily instruct ``pytest`` to discover tests from every python file::
+.. regendoc:wipe
+
+You can easily instruct ``pytest`` to discover tests from every Python file::

     # content of pytest.ini
     [pytest]
     python_files = *.py

-However, many projects will have a ``setup.py`` which they don't want to be imported. Moreover, there may files only importable by a specific python version.
-For such cases you can dynamically define files to be ignored by listing
-them in a ``conftest.py`` file::
+However, many projects will have a ``setup.py`` which they don't want to be
+imported. Moreover, there may be files only importable by a specific Python
+version. For such cases you can dynamically define files to be ignored by
+listing them in a ``conftest.py`` file::

     # content of conftest.py
     import sys
@@ -198,7 +200,7 @@ them in a ``conftest.py`` file::
     if sys.version_info[0] > 2:
         collect_ignore.append("pkg/module_py2.py")

-And then if you have a module file like this::
+and then if you have a module file like this::

     # content of pkg/module_py2.py
     def test_only_on_python2():
@@ -207,13 +209,13 @@ And then if you have a module file like this::
         except Exception, e:
             pass

-and a setup.py dummy file like this::
+and a ``setup.py`` dummy file like this::

     # content of setup.py
     0/0  # will raise exception if imported

-then a pytest run on Python2 will find the one test and will leave out the
-setup.py file::
+If you run with a Python 2 interpreter then you will find the one test and will
+leave out the ``setup.py`` file::

     #$ pytest --collect-only
     ====== test session starts ======
@@ -225,13 +227,13 @@ setup.py file::
     ====== no tests ran in 0.04 seconds ======

-If you run with a Python3 interpreter both the one test and the setup.py file
-will be left out::
+If you run with a Python 3 interpreter both the one test and the ``setup.py``
+file will be left out::

     $ pytest --collect-only
     ======= test session starts ========
     platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
     rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
     collected 0 items

     ======= no tests ran in 0.12 seconds ========
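Putting the pieces of the documented ``conftest.py`` together: the ignore list is plain module-level data that pytest reads at collection time. A runnable sketch, using the same file names as the docs above:

```python
# content of conftest.py -- version-dependent collection ignores
import sys

collect_ignore = ["setup.py"]  # never import setup.py during collection
if sys.version_info[0] > 2:
    # pkg/module_py2.py uses Python 2-only syntax ("except Exception, e"),
    # so keep it out of collection when running under Python 3.
    collect_ignore.append("pkg/module_py2.py")

print(collect_ignore)
```

Because the list is built at import time, any condition available to Python (interpreter version, platform, environment variables) can decide what gets excluded.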
44 changes: 31 additions & 13 deletions doc/en/fixture.rst
@@ -127,10 +127,39 @@ It's a prime example of `dependency injection`_ where fixture
 functions take the role of the *injector* and test functions are the
 *consumers* of fixture objects.

+.. _`conftest.py`:
+.. _`conftest`:
+
+``conftest.py``: sharing fixture functions
+------------------------------------------
+
+If, while implementing your tests, you realize that you want to use a
+fixture function from multiple test files, you can move it to a
+``conftest.py`` file. You don't need to import the fixture you want to
+use in a test; it automatically gets discovered by pytest. The discovery of
+fixture functions starts at test classes, then test modules, then
+``conftest.py`` files and finally builtin and third party plugins.
+
+You can also use the ``conftest.py`` file to implement
+:ref:`local per-directory plugins <conftest.py plugins>`.
+
+Sharing test data
+-----------------
+
+If you want to make test data from files available to your tests, a good way
+to do this is by loading these data in a fixture for use by your tests.
+This makes use of the automatic caching mechanisms of pytest.
+
+Another good approach is by adding the data files in the ``tests`` folder.
+There are also community plugins available to help manage this aspect of
+testing, e.g. `pytest-datadir <https://github.com/gabrielcnr/pytest-datadir>`__
+and `pytest-datafiles <https://pypi.python.org/pypi/pytest-datafiles>`__.
+
 .. _smtpshared:

-Scope: Sharing a fixture across tests in a class, module or session
--------------------------------------------------------------------
+Scope: sharing a fixture instance across tests in a class, module or session
+----------------------------------------------------------------------------

 .. regendoc:wipe
@@ -878,17 +907,6 @@ All test methods in this TestClass will use the transaction fixture while
 other test classes or functions in the module will not use it unless
 they also add a ``transact`` reference.

-
-Shifting (visibility of) fixture functions
-----------------------------------------------------
-
-If during implementing your tests you realize that you
-want to use a fixture function from multiple test files you can move it
-to a :ref:`conftest.py <conftest.py>` file or even separately installable
-:ref:`plugins <plugins>` without changing test code. The discovery of
-fixtures functions starts at test classes, then test modules, then
-``conftest.py`` files and finally builtin and third party plugins.
-
 Overriding fixtures on various levels
 -------------------------------------
3 changes: 1 addition & 2 deletions doc/en/plugins.rst
@@ -91,7 +91,7 @@ environment you can type::

 and will get an extended test header which shows activated plugins
 and their names. It will also print local plugins aka
-:ref:`conftest.py <conftest>` files when they are loaded.
+:ref:`conftest.py <conftest.py plugins>` files when they are loaded.

 .. _`cmdunregister`:
@@ -152,4 +152,3 @@ in the `pytest repository <https://github.com/pytest-dev/pytest>`_.
    _pytest.terminal
    _pytest.tmpdir
    _pytest.unittest
-
12 changes: 11 additions & 1 deletion doc/en/skipping.rst
@@ -3,7 +3,7 @@
 .. _skipping:

 Skip and xfail: dealing with tests that cannot succeed
-=====================================================================
+======================================================

 You can mark test functions that cannot be run on certain platforms
 or that you expect to fail so pytest can deal with them accordingly and
@@ -152,6 +152,16 @@ will be skipped if any of the skip conditions is true.
 .. _`whole class- or module level`: mark.html#scoped-marking


+Skipping files or directories
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes you may need to skip an entire file or directory, for example if the
+tests rely on Python version-specific features or contain code that you do not
+wish pytest to run. In this case, you must exclude the files and directories
+from collection. Refer to :ref:`customizing-test-collection` for more
+information.
+
+
 Skipping on a missing import dependency
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 changes: 0 additions & 2 deletions doc/en/writing_plugins.rst
@@ -57,9 +57,7 @@ Plugin discovery order at tool startup

 .. _`pytest/plugin`: http://bitbucket.org/pytest-dev/pytest/src/tip/pytest/plugin/
 .. _`conftest.py plugins`:
-.. _`conftest.py`:
 .. _`localplugin`:
-.. _`conftest`:
 .. _`local conftest plugins`:

 conftest.py: local per-directory plugins
