Mdd module unit test docs #31373 (merged, 14 commits, Oct 19, 2017)
35 changes: 26 additions & 9 deletions docs/docsite/rst/dev_guide/testing.rst
At a high level we have the following classifications of tests:
* Tests directly against individual parts of the code base.


If you're a developer, one of the most valuable things you can do is look at the GitHub
issues list and help fix bugs. We almost always prioritize bug fixing over feature
development.

Even for non-developers, helping to test pull requests for bug fixes and features is still
immensely valuable. Ansible users who understand how to write playbooks and roles should be
able to add integration tests, so GitHub pull requests with integration tests that show
bugs in action are also a great way to help.


Testing within GitHub & Shippable
=================================

Organization
------------

When Pull Requests (PRs) are created they are tested using Shippable, a Continuous Integration (CI) tool. Results are shown at the end of every PR.


When Shippable detects an error and it can be linked back to a file that has been modified in the PR then the relevant lines will be added as a GitHub comment. For example::

   The test `ansible-test sanity --test pep8` failed with the following errors:
Then run the tests detailed in the GitHub comment::
   ansible-test sanity --test pep8
   ansible-test sanity --test validate-modules


If there isn't a GitHub comment stating what's failed you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.

Rerunning a failing CI job
--------------------------

If the issue persists, please contact us in ``#ansible-devel`` on Freenode IRC.
How to test a PR
================


Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.

Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.
If the PR does not resolve the issue, or if you see any failures from the unit/integration
tests, include that information in your comment.

Code Coverage Online
````````````````````

`The online code coverage reports <https://codecov.io/gh/ansible/ansible>`_ are a good way
to identify areas for testing improvement in Ansible. By following the red colors you can
drill down through the reports to find files which have no tests at all. Adding both
integration and unit tests which show clearly how code should work, verify important
Ansible functions, and increase testing coverage in areas where there is none is a valuable
way to help improve Ansible.

The code coverage reports only cover the ``devel`` branch of Ansible, where new feature
development takes place. Pull requests and new code will be missing from the codecov.io
coverage reports, so local reporting is needed. Most ``ansible-test`` commands allow you
to collect code coverage; this is particularly useful for indicating where to extend
testing. See :doc:`testing_running_locally` for more information.


Want to know more about testing?
================================

If you'd like to know more about the plans for improving the testing of Ansible, why not join the
`Testing Working Group <https://github.com/ansible/community/blob/master/meetings/README.md>`_?

19 changes: 17 additions & 2 deletions docs/docsite/rst/dev_guide/testing_running_locally.rst
Use the ``ansible-test shell`` command to get an interactive shell in the same environment used to run the tests.
Code Coverage
=============

Code coverage reports make it easy to identify untested code for which more tests should
be written. Online reports are available but only cover the ``devel`` branch (see
:doc:`testing`). For new code, local reports are needed.

Add the ``--coverage`` option to any test command to collect code coverage data. If you
aren't using the ``--tox`` or ``--docker`` options, which create an isolated Python
environment, then you may have to use the ``--requirements`` option to ensure that the
correct version of the coverage module is installed::

   ansible-test units --coverage apt
   ansible-test integration --coverage aws_lambda --tox --requirements
   ansible-test coverage html


Reports can be generated in several different formats:

* ``ansible-test coverage report`` - Console report.
* ``ansible-test coverage html`` - HTML report.
* ``ansible-test coverage xml`` - XML report.

To clear data between test runs, use the ``ansible-test coverage erase`` command. For a
full list of features see the online help::

   ansible-test coverage --help

133 changes: 118 additions & 15 deletions docs/docsite/rst/dev_guide/testing_units.rst
**********
Unit Tests
**********

Unit tests are small isolated tests that target a specific library or module. Unit tests
in Ansible are currently the only way of driving tests from Python within Ansible's
continuous integration process. This means that in some circumstances the tests may be a
bit wider than just units.

.. contents:: Topics

Available Tests
===============

Unit tests can be found in `test/units
<https://github.com/ansible/ansible/tree/devel/test/units>`_. Notice that the directory
structure of the tests matches that of ``lib/ansible/``.

Running Tests
=============

The Ansible unit tests can be run across the whole code base by doing:

.. code:: shell

   ansible-test units --tox

Or against a specific Python version by doing:

.. code:: shell

   ansible-test units --tox --python 2.7 apt



For advanced usage see the online help::

   ansible-test units --help

You can also run tests in Ansible's continuous integration system by opening a pull
request. This will automatically determine which tests to run based on the changes made
in your pull request.


Installing dependencies
=======================

``ansible-test`` has a number of dependencies. For ``units`` tests we suggest using ``tox``.

The dependencies can be installed using the ``--requirements`` argument, which will
install all the required dependencies needed for unit tests. For example:

.. code:: shell

   ansible-test units --requirements apt
Using ``ansible-test`` with ``--tox`` requires tox >= 2.5.0.


The full list of requirements can be found at `test/runner/requirements
<https://github.com/ansible/ansible/tree/devel/test/runner/requirements>`_. Requirements
files are named after their respective commands. See also the `constraints
<https://github.com/ansible/ansible/blob/devel/test/runner/requirements/constraints.txt>`_
applicable to all commands.


Extending unit tests
====================

.. warning:: What a unit test isn't

   If you start writing a test that requires external services then
   you may be writing an integration test, rather than a unit test.
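One common way to keep a test at the unit level is to replace the external service with a
stand-in from the standard library's ``unittest.mock``. The sketch below is illustrative
only: ``fetch_status`` and ``fake_client`` are hypothetical names invented for this
example, not part of Ansible.

```python
from unittest import mock

def fetch_status(client):
    # hypothetical code under test: normally this would talk to a real service
    return client.get('/status').upper()

def test_fetch_status_without_network():
    # replace the external client with a mock so no network access is needed
    fake_client = mock.Mock()
    fake_client.get.return_value = 'ok'
    assert fetch_status(fake_client) == 'OK'
    fake_client.get.assert_called_once_with('/status')
```

Because the mock records how it was called, the test can also verify that the code under
test made exactly the request it was supposed to make.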


Structuring Unit Tests
``````````````````````

Ansible drives unit tests through `pytest <https://docs.pytest.org/en/latest/>`_. This
means that tests can either be written as simple functions, included in any file named
``test_<something>.py``, or as classes.

Here is an example of a function::

   # this function will be called simply because it is named test_*()

   def test_add():
       a = 10
       b = 23
       c = 33
       assert a + b == c

Here is an example of a class::

   import unittest

   class AddTester(unittest.TestCase):

       def setUp(self):
           self.a = 10
           self.b = 23

       # this function will be run because it is named test_*()
       def test_add(self):
           c = 33
           assert self.a + self.b == c

       # this function will also be run because it is named test_*()
       def test_subtract(self):
           c = -13
           assert self.a - self.b == c

Both methods work fine in most circumstances; the function-based interface is simpler and
quicker, so that's probably where you should start when you are just trying to add a few
basic tests for a module. Class-based tests allow tidier setup and teardown of
prerequisites, so if you have many test cases for your module you may want to refactor to
use that.

Assertions using the simple ``assert`` function inside the tests will give full
information on the cause of the failure, with a traceback of functions called during the
assertion. This means that plain asserts are recommended over other external assertion
libraries.

A number of the unit test suites include functions that are shared between several
modules, especially in the networking arena. In these cases a file is created in the same
directory, which is then included directly.


Module test case common code
````````````````````````````

Keep common code as specific as possible within the ``test/units/`` directory structure.
For example, if it's specific to testing Amazon modules, it should be in
``test/units/modules/cloud/amazon/``. Don't import common unit test code from directories
outside the current or parent directories.

Don't import other unit tests from a unit test. Any common code should be in dedicated
files that aren't themselves tests.
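As a hedged illustration of such a dedicated file, a shared helper module (the path
``test/units/modules/cloud/amazon/helpers.py`` and its contents below are invented for
this sketch, not real Ansible code) might build baseline arguments that several tests
share:

```python
# helpers.py - imported by several test files, but not itself a test

def make_module_params(**overrides):
    """Return a baseline params dict for tests, applying any overrides."""
    params = {'region': 'us-east-1', 'validate_certs': True}
    params.update(overrides)
    return params
```

Individual tests can then call ``make_module_params(region='eu-west-1')`` and only state
the values they actually care about.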


Fixtures files
``````````````

To mock out fetching results from devices, or to provide other complex data structures
that come from external libraries, you can use ``fixtures`` to read in pre-generated data.

Text files live in ``test/units/modules/network/PLATFORM/fixtures/``.

Data is loaded using the ``load_fixture`` method.
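A minimal sketch of how such a fixture-reading helper can work; the signature and demo
below are simplified for illustration and are not the exact helper used in the Ansible
test tree:

```python
import os
import tempfile

def load_fixture(name, fixture_path):
    """Read a pre-generated fixture file and return its text contents."""
    with open(os.path.join(fixture_path, name)) as f:
        return f.read()

# tiny self-contained demo: write a fake fixture file, then read it back
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, 'show_version.txt'), 'w') as f:
    f.write('fake device output')

assert load_fixture('show_version.txt', demo_dir) == 'fake device output'
```

In a real test the fixture file would contain captured device output, so the module under
test parses realistic data without ever contacting a device.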

See `eos_banner test
<https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py>`_
for a practical example.

If you are simulating APIs you may find that Python placebo is useful. See
:doc:`testing_units_modules` for more information.


Code Coverage For New or Updated Unit Tests
```````````````````````````````````````````
New code will be missing from the codecov.io coverage reports (see :doc:`testing`), so
local reporting is needed. Most ``ansible-test`` commands allow you to collect code
coverage; this is particularly useful for indicating where to extend testing.

To collect coverage data add the ``--coverage`` argument to your ``ansible-test`` command line:

.. code:: shell

   ansible-test units --coverage apt
   ansible-test coverage html

Reports can be generated in several different formats:
* ``ansible-test coverage html`` - HTML report.
* ``ansible-test coverage xml`` - XML report.

To clear data between test runs, use the ``ansible-test coverage erase`` command. See
:doc:`testing_running_locally` for more information about generating coverage
reports.


.. seealso::

   :doc:`testing_units_modules`
       Special considerations for unit testing modules
   :doc:`testing_running_locally`
       Running tests locally including gathering and reporting coverage data
   `Python 3 documentation - 26.4. unittest — Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
       The documentation of the unittest framework in Python 3
   `Python 2 documentation - 25.3. unittest — Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
       The documentation of the earliest supported unittest framework - from Python 2.6
   `pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
       The documentation of pytest - the framework actually used to run Ansible unit tests