
test failures ... #339

Closed
cehbrecht opened this issue Aug 2, 2016 · 17 comments

@cehbrecht (Contributor)

Most of the time there are tests which fail due to unavailable online resources. This makes it hard to see whether a patch causes a failure ... you need to check all the tests.

When using pytest one can mark, for example, long-running tests as "slow" or those using external resources as "online":

http://docs.pytest.org/en/latest/mark.html

Tests which are currently broken and cannot be fixed could be skipped:

http://docs.pytest.org/en/latest/skipping.html

Should we start using these markers?
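
For illustration only, a minimal sketch of what such markers could look like (the test names and bodies below are made up, not taken from the OWSLib suite):

    import pytest

    @pytest.mark.online  # talks to an external service
    def test_capabilities_from_remote_server():
        pass  # ... real test body here

    @pytest.mark.slow  # long-running
    def test_large_harvest():
        pass  # ... real test body here

    # broken, but kept as a reminder instead of being deleted
    @pytest.mark.skip(reason="upstream service is down, fix later")
    def test_currently_broken():
        pass  # ... real test body here

Such tests could then be deselected with e.g. py.test -m "not online" or py.test -m "not slow".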

@cehbrecht (Contributor, Author)

shouldn't we start to fix the test suite?

https://travis-ci.org/geopython/OWSLib/builds

@johanvdw (Member)

I'm also in favor of switching to pytest, and willing to spend some time doing so.

Any objections?

@tomkralidis (Member)

Thanks for the info. Comments:

  • we are already using pytest as our test suite
  • what value does skipping tests provide over simply removing them?

@cehbrecht (Contributor, Author)

If you don't need a test you can remove it. But sometimes a test is broken and we would like to keep it, yet we can't fix it because it uses an external service which is down, or it is simply not a module we feel responsible for. By skipping it, we don't block the test run ... we still have a reminder and the chance to fix it later.

pytest also runs the doctests ... I don't know how to add the skip decorator there.

@kwilcox (Member) commented Nov 30, 2017

We have needed to run or mock the services used in tests for a long time.

How about we mark them as "remote", set up Travis to run "-m remote" and "-m not remote", and allow failures on the "-m remote" tests?
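
A possible way to wire this up (just a sketch; the conftest.py below is an assumption, not the actual OWSLib configuration) is to register the proposed marker once and then select by it on the command line:

    # conftest.py (sketch): register the "remote" marker so it shows up in
    # "pytest --markers" and typos are caught when running with --strict.
    def pytest_configure(config):
        config.addinivalue_line(
            "markers", "remote: test requires access to an external online service")

Travis could then run py.test -m "not remote" as the required job and py.test -m remote as a job that is allowed to fail.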

@cehbrecht (Contributor, Author)

... I like the "remote" tag ... does someone want to make an example?

@johanvdw (Member) commented Dec 4, 2017

Here is a simple example:

johanvdw@8beb313

The results should look like:

johan@x1:~/git/OWSLib$ py.test
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 1 items 

tests/test_sos.py::TestSos::test_content PASSED

======================================= 1 passed in 0.31 seconds ========================================
johan@x1:~/git/OWSLib$ py.test -m "not remote"
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 1 items 

========================================== 1 tests deselected ===========================================
===================================== 1 deselected in 0.12 seconds ======================================
johan@x1:~/git/OWSLib$ py.test -m remote
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 1 items 

tests/test_sos.py::TestSos::test_content PASSED

======================================= 1 passed in 0.27 seconds ========================================
johan@x1:~/git/OWSLib$ 

Note that I'm mostly in favor of changing the tests in the tests/doctests directory to normal tests. I find them easier to maintain (e.g. code completion and syntax highlighting), and they offer more possibilities.

Doctests are fine for checking that the examples in the documentation still work, but IMHO they should not be used as unit tests.

Note that in the example commit I dropped quite a few options from the tox configuration to make the example clearer. Don't consider this a pull request. We should definitely keep using the old tests until they are replaced.

@cehbrecht (Contributor, Author)

I also prefer to use normal tests instead of doctests. We could do this for new tests and have a slow transition over time. I mainly use the WPS module ... so I could take care of the WPS test suite.

To get a green light on Travis we need to figure out how to add a pytest marker to those tests which are not working ... or we need to comment them out, remove them or replace them.

@johanvdw (Member) commented Dec 5, 2017

Maybe an even better way would be using pytest.mark.skipif together with a basic availability check, e.g.:

    @pytest.mark.skipif(
        urllib.urlopen("http://sensorweb.demo.52north.org/nonexisting").getcode() != 200,
        reason="website unavailable")
    def test_existing(self):
        # assumes urllib, pytest and SensorObservationService are already imported
        service = SensorObservationService(
            'http://sensorweb.demo.52north.org/52n-sos-webapp/sos/kvp',
            version='2.0.0')
        assert "http://www.52north.org/test/offering/1" in service.contents

Which would lead to:

johan@x1:~/git/OWSLib$ py.test
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 2 items 

tests/test_sos.py::TestSos::test_content PASSED
tests/test_sos.py::TestSos::test_existing SKIPPED
======================================== short test summary info ========================================
SKIP [1] tests/test_sos.py:20: website unavailable

@cehbrecht (Contributor, Author)

skipif sounds good 👍

@cehbrecht (Contributor, Author)

I have rewritten a doctest using pytest skipif:
https://github.com/bird-house/OWSLib/blob/fix-test-csw-ngdc/tests/test_csw_ngdc.py

Try with:

$ pytest tests/test_csw_ngdc.py

Previous doctest:
https://github.com/bird-house/OWSLib/blob/fix-test-csw-ngdc/tests/doctests/csw_ngdc.txt

Still I don't know how to skip doctests, see links:

Shall we start to rewrite the tests? Should they go into a new test subfolder?

For those failing tests we don't want to rewrite ... but still want to keep for information ... should we comment them out and print a reminder?
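
One standard-library option for the doctest question is the doctest +SKIP directive, which marks a single example so it is never executed; a hypothetical fragment (the URL and attribute are illustrative only):

    >>> from owslib.csw import CatalogueServiceWeb
    >>> csw = CatalogueServiceWeb('http://www.ngdc.noaa.gov/geoportal/csw')  # doctest: +SKIP
    >>> csw.identification.type  # doctest: +SKIP
    'CSW'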

@johanvdw (Member)

As @tomkralidis seems to be the current maintainer, it would be nice to hear his opinion.

I'm in favor of migrating the tests to the new system. For the currently failing tests I think we should find out whether the failures are temporary, or replace them with different servers.

Flagging online tests is useful for packaging in Debian. There it is required that a package builds without online access (which means I had to exclude the tests in the build process). For OWSLib one could argue that not going online is a bit strange (as the whole point of the library is using online services), so skipping just the tests for sites that are unavailable seems good as well.

@tomkralidis (Member)

Hi all: thanks for the valuable analysis/assessment. +1 on moving tests to normal tests. Of course we need to ensure that the remote tests are not simply always skipped (i.e. shifting the problem elsewhere). As we know, the root cause of the failures lies in the very nature of OWSLib's goal :)

Are there any thoughts on using mocks? This would require some work to set up the mocks, but then we would have no dependency on external resources, ever. I'm willing to sprint on this if others are.

Having said this, I'm +1 on moving to normal tests regardless. Can we move forward with that as a first step?
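
A rough sketch of the offline idea, using a local fixture rather than a mocking library (not an actual OWSLib test; the fixture path is assumed, and it relies on the xml keyword argument that constructors such as WebMapService accept for a pre-fetched capabilities document):

    from owslib.wms import WebMapService

    def test_wms_contents_offline():
        # Parse a locally stored GetCapabilities document instead of
        # contacting a live server; the URL below is never requested.
        with open('tests/resources/wms_capabilities.xml', 'rb') as f:
            canned_xml = f.read()
        wms = WebMapService('http://example.com/wms', version='1.1.1',
                            xml=canned_xml)
        assert len(wms.contents) > 0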

@cehbrecht (Contributor, Author)

I have made a PR #407 with some converted doctests.

I personally feel responsible for the WPS module ... that is the one I'm currently using. Once the doctest conversion is accepted I would convert the WPS tests as my next step. I can also look into mocking a WPS service.

@tomkralidis (Member)

Thanks @cehbrecht. I'll commit to the remaining CSW and ISO tests.

@cehbrecht added this to the 0.17.0 milestone on Mar 20, 2018
@cehbrecht (Contributor, Author)

To get a green light on Travis again I have moved the broken tests to the tests/_broken folder, which is skipped by the test run on Travis (#424). One can still run them locally, and we can fix them if they are valuable.

We have also started to convert tests to "normal" tests and to make use of pytest.mark.skipif to skip a test if its service is currently not available.

There is also the pytest flaky extension for rerunning tests which sometimes fail. It might help us during the transition.
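
A minimal sketch of how the flaky plugin is typically used (the test below is hypothetical):

    from flaky import flaky

    @flaky(max_runs=3, min_passes=1)  # rerun up to 3 times before reporting failure
    def test_occasionally_timing_out_service():
        pass  # ... test against a service that sometimes times out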

There is more work waiting to get the test-suite in good shape again.

@cehbrecht (Contributor, Author)

I'm closing this one for now. The tests are working most of the time. There is more work to do, but that can happen continuously.
