test failures ... #339

Closed
cehbrecht opened this Issue Aug 2, 2016 · 17 comments

@cehbrecht
Contributor

cehbrecht commented Aug 2, 2016

Most of the time there are tests that fail due to unavailable online resources. This makes it hard to see whether a patch causes a failure ... you need to check all the tests.

When using pytest, one can mark long-running tests as "slow", for example, or those needing external resources as "online":

http://docs.pytest.org/en/latest/mark.html
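For illustration, a minimal sketch of how such markers could look (the marker names follow the wording above; the test names and bodies are placeholders):

    import pytest

    @pytest.mark.online        # needs an external service
    def test_remote_capabilities():
        pass

    @pytest.mark.slow          # long running
    def test_full_harvest():
        pass

The online tests could then be deselected with pytest -m "not online".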

Tests which are currently broken and can not be fixed could be skipped:

http://docs.pytest.org/en/latest/skipping.html
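And a minimal sketch of skipping a currently broken test (the reason string and test name are placeholders):

    import pytest

    @pytest.mark.skip(reason="known broken: remote endpoint moved")
    def test_old_csw_endpoint():
        pass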

Should we start using these markers?

@cehbrecht

Contributor

cehbrecht commented Apr 7, 2017

Shouldn't we start fixing the test suite?

https://travis-ci.org/geopython/OWSLib/builds

@johanvdw

Member

johanvdw commented Nov 28, 2017

I'm also in favor of switching to pytest, and willing to spend some time doing so.

Any objections?

@tomkralidis

Member

tomkralidis commented Nov 29, 2017

Thanks for the info. Comments:

  • we are already using pytest as our test suite
  • what value does skipping tests provide over simply removing them?
@cehbrecht

Contributor

cehbrecht commented Nov 29, 2017

If you don't need the test, you can remove it. But sometimes a test is broken and we would like to keep it, yet can't fix it because it uses an external service that is down, or it is simply not a module we feel responsible for. By skipping it, we don't block the test run ... we still have a reminder and the chance to fix it.

pytest also runs the doctests ... I don't know how to add the skip decorator there.

@kwilcox

Member

kwilcox commented Nov 30, 2017

We have needed to run or mock the services used in tests for a long time.

How about we mark them as "remote", set up Travis to run both "-m remote" and "-m 'not remote'", and allow failures on the "-m remote" tests?
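A sketch of what registering such a marker might look like (hypothetical conftest.py content, not existing code in the repo); the two Travis jobs would then call pytest -m remote and pytest -m "not remote" respectively:

    # conftest.py (hypothetical sketch)
    def pytest_configure(config):
        # register the custom marker so pytest knows about it
        config.addinivalue_line(
            "markers", "remote: test needs access to an external online service")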

@cehbrecht

Contributor

cehbrecht commented Dec 4, 2017

... I like the "remote" tag ... does someone want to make an example?

@johanvdw

Member

johanvdw commented Dec 4, 2017

Here is a simple example:

johanvdw@8beb313

The results should look like:

johan@x1:~/git/OWSLib$ py.test
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 1 items 

tests/test_sos.py::TestSos::test_content PASSED

======================================= 1 passed in 0.31 seconds ========================================
johan@x1:~/git/OWSLib$ py.test -m "not remote"
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 1 items 

========================================== 1 tests deselected ===========================================
===================================== 1 deselected in 0.12 seconds ======================================
johan@x1:~/git/OWSLib$ py.test -m remote
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 1 items 

tests/test_sos.py::TestSos::test_content PASSED

======================================= 1 passed in 0.27 seconds ========================================
johan@x1:~/git/OWSLib$ 

Note that I'm mostly a fan of converting the tests in the tests/doctests directory to normal tests. I find them easier to maintain (e.g. code completion/highlighting), and it gives more possibilities.

Doctests are fine for checking that the examples in documentation still work fine, but IMHO they should not be used for unit tests.

Note that in the example commit I dropped quite a few options from tox to make the example clearer. Don't consider this a pull request; we should definitely keep using the old tests until they are replaced.
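The linked commit is not reproduced above; as a rough sketch under the same assumptions, the marked test could look like this (the class and test names follow the output above, the rest is illustrative):

    import pytest
    from owslib.sos import SensorObservationService

    class TestSos(object):
        @pytest.mark.remote
        def test_content(self):
            # hits the live 52North demo SOS, hence the "remote" marker
            service = SensorObservationService(
                'http://sensorweb.demo.52north.org/52n-sos-webapp/sos/kvp',
                version='2.0.0')
            assert 'http://www.52north.org/test/offering/1' in service.contents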

@cehbrecht

Contributor

cehbrecht commented Dec 5, 2017

I also prefer normal tests over doctests. We could do this for new tests and have a gradual transition over time. I mainly use the WPS module ... so I could take care of the WPS test suite.

To get a green light on Travis we need to figure out how to add a pytest marker to those tests which are not working ... or we need to comment them out, remove them, or replace them.

@johanvdw

Member

johanvdw commented Dec 5, 2017

Maybe an even better way would be to use pytest.mark.skipif after a basic test, e.g.:

    # assumes pytest, urllib and SensorObservationService are imported at module level
    @pytest.mark.skipif(
        urllib.urlopen("http://sensorweb.demo.52north.org/nonexisting").getcode() != 200,
        reason="website unavailable")
    def test_existing(self):
        service = SensorObservationService(
            'http://sensorweb.demo.52north.org/52n-sos-webapp/sos/kvp',
            version='2.0.0')
        assert "http://www.52north.org/test/offering/1" in service.contents

Which would lead to:

johan@x1:~/git/OWSLib$ py.test
========================================== test session starts ==========================================
platform linux2 -- Python 2.7.13, pytest-3.0.6, py-1.5.2, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/johan/git/OWSLib, inifile: tox.ini
plugins: hypothesis-3.33.0
collected 2 items 

tests/test_sos.py::TestSos::test_content PASSED
tests/test_sos.py::TestSos::test_existing SKIPPED
======================================== short test summary info ========================================
SKIP [1] tests/test_sos.py:20: website unavailable
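
One possible refinement (just a sketch; the helper name is made up and assumes requests is available): wrap the availability check in a small helper that also swallows connection errors, so collection does not blow up when the host is unreachable:

    import pytest
    import requests

    def service_ok(url, timeout=5):
        # return False instead of raising when the host cannot be reached
        try:
            return requests.get(url, timeout=timeout).status_code == 200
        except requests.RequestException:
            return False

    @pytest.mark.skipif(not service_ok('http://sensorweb.demo.52north.org/52n-sos-webapp/sos/kvp'),
                        reason="service unavailable")
    def test_existing_offerings():
        pass  # body omitted; same idea as the example above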
@cehbrecht

Contributor

cehbrecht commented Dec 5, 2017

skipif sounds good 👍

@cehbrecht

Contributor

cehbrecht commented Dec 13, 2017

I have rewritten a doctest using pytest skipif:
https://github.com/bird-house/OWSLib/blob/fix-test-csw-ngdc/tests/test_csw_ngdc.py

Try with:

$ pytest tests/test_csw_ngdc.py

Previous doctest:
https://github.com/bird-house/OWSLib/blob/fix-test-csw-ngdc/tests/doctests/csw_ngdc.txt

I still don't know how to skip doctests, see links:

Shall we start to rewrite the tests? Should they go into a new test subfolder?

For those (failing) tests we don't want to rewrite ... but still want to keep for reference ... should we comment them out and print a reminder?
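
Regarding skipping doctests, two possibilities worth trying (assumptions, not something adopted here yet): mark individual statements in the .txt files with the standard "# doctest: +SKIP" directive, or exclude whole doctest files from collection via collect_ignore in a conftest.py:

    # tests/conftest.py (hypothetical sketch)
    # keep known-broken doctest files out of the collected test set
    collect_ignore = [
        'doctests/csw_ngdc.txt',   # illustrative entry only
    ]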

@johanvdw

Member

johanvdw commented Dec 14, 2017

As @tomkralidis seems to be the current maintainer, it would be nice to hear his opinion.

I'm in favor of migrating the tests to the new system. For the currently failing tests, I think we should find out whether the failures are temporary or replace them with different servers.

Flagging online tests is useful for packaging in Debian, where a package must build without online access (which means I excluded the tests in the build process). For OWSLib one could argue that not going online is a bit odd (as the goal of the library is to use online services), so skipping just the tests for sites that are unavailable seems good as well.

@tomkralidis

Member

tomkralidis commented Dec 15, 2017

Hi all: thanks for the valuable analysis/assessment. +1 on moving to normal tests. Of course we need to ensure that the remote tests are not always all skipped (i.e. shifting the problem elsewhere). As we know, the root cause of the failures is the nature of OWSLib's goal :)

Are there any thoughts on using mocks? This would require some work to set up mocks, but then we would have no dependency on external resources, ever. I'm willing to sprint on this if others are.

Having said this, I'm +1 on moving to normal tests regardless. Can we move forward with that as a first step?
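
A rough sketch of the mocking idea (everything below is an illustrative assumption: the patched target, the canned-response file and the assertion are not taken from actual OWSLib tests):

    from io import BytesIO
    from unittest import mock   # on Python 2, the external "mock" package

    from owslib.wms import WebMapService

    def test_wms_capabilities_offline():
        # serve a canned GetCapabilities document instead of hitting the network
        with open('tests/resources/wms_capabilities.xml', 'rb') as f:
            canned = f.read()
        with mock.patch('owslib.util.openURL', return_value=BytesIO(canned)):
            wms = WebMapService('http://example.invalid/wms', version='1.1.1')
            assert wms.identification.type == 'OGC:WMS'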

@cehbrecht

Contributor

cehbrecht commented Dec 19, 2017

I have made a PR #407 with some converted doctests.

I personally feel responsible for the WPS module ... that is the one I'm currently using. When the doctest conversion is accepted, I will convert the WPS tests as my next step. I can also look into mocking a WPS service.

@tomkralidis

Member

tomkralidis commented Dec 19, 2017

Thanks @cehbrecht. I'll commit to the remaining CSW and ISO tests.

@cehbrecht cehbrecht added this to the 0.17.0 milestone Mar 20, 2018

@cehbrecht

Contributor

cehbrecht commented Mar 22, 2018

To get a green light on Travis again I have moved the broken tests to the tests/_broken folder, which is skipped by the test run on Travis (#424). One can still run them locally, and we can fix them if they are valuable.

We have also started to convert tests to "normal" tests and make use of pytest.mark.skipif to skip a test if the service is currently not available.

There is also the pytest flaky extension for rerunning tests which sometimes fail. It might help us during the transition.

There is more work waiting to get the test-suite in good shape again.
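
For reference, a small sketch of how the flaky plugin is typically used (assuming the flaky package is installed; the test itself is a placeholder):

    from flaky import flaky

    @flaky(max_runs=3, min_passes=1)   # retry up to 3 times before reporting a failure
    def test_remote_service_content():
        pass  # imagine a request to a sometimes-unreliable service here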

@cehbrecht

Contributor

cehbrecht commented May 8, 2018

I'm closing this one for now. Tests are working most of the time. There is more work to do, but this can happen continuously.
