
Include test files in coverage report #6902

Closed
Kojoley opened this issue Aug 4, 2016 · 6 comments

@Kojoley
Member

Kojoley commented Aug 4, 2016

Of course I can run coverage for the tests locally, but I think it should be accessible on Coveralls. You cannot blindly rely on a test report until you are sure the tests are actually running; otherwise it can give you a false sense of security.

I have run into a problem where I cannot be sure that pytest runs all the tests. Moreover, I got test failures with pytest in tests that nose does not run at all (and I do not know why they were not deleted, then). Here is the list:

  • test_sankey not deprecated, runs without issues
  • test_skew not deprecated, has outdated baseline images or tolerance problems (..FF.F: RMS 0.014, 0.264, 0.065)
  • test_ttconv not deprecated, runs without issues
  • test_usetex not deprecated, has outdated baseline images or tolerance problems (.F: RMS 0.261)

What benefits do we get from including the test files in the coverage report? I collected the dark spots in the test suite:

Important (tests not running, uncovered cases):

Less important (partial coverage):

Not important (requires tests for the tests, and other cases):

Someone could say that we would need tests for the tests to get full coverage -- I would say no: a few special places can be marked with # pragma: no cover, and the rest can be fixed to reach full coverage.
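For illustration, a minimal sketch of that pragma (the helper below is invented, not matplotlib code); coverage.py simply leaves the marked function out of the measurement instead of demanding a test for it:

```python
# Hypothetical helper, only to illustrate the marker; not matplotlib code.
def _debug_dump(state):  # pragma: no cover
    # Used only interactively while debugging, so it is deliberately
    # excluded from coverage instead of getting a dedicated test.
    print(state)
```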

Currently this can be done with the nose-cov plugin, and after migrating to pytest we will get exactly the same coverage reports with pytest-cov.

@jenshnielsen
Member

Thanks for investigating this. I think the fundamental problem is that we have an opt-in default_test_modules list in matplotlib/__init__.py, and new test modules have been added without being added to that list. I suggest that we get rid of that mechanism. If tests need to be skipped, they should be explicitly marked as such. This covers your first category.
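As a rough sketch of what "explicitly marked" could look like under pytest (the test name and the skip condition here are made up, not an actual matplotlib test):

```python
import shutil

import pytest


# Hypothetical example: the skip lives on the test itself, so a skipped
# test still shows up in the report instead of silently never running.
@pytest.mark.skipif(shutil.which("tex") is None,
                    reason="usetex tests require a TeX installation")
def test_usetex_sketch():
    assert True  # placeholder body
```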

I am not convinced that adding the tests to the default coverage is the right solution. To me it would bias the coverage towards an artificially high number which is not really representative of the real coverage of our code.

@Kojoley
Member Author

Kojoley commented Aug 7, 2016

I suggest that we get rid of that mechanism.

I am for this too.

Currently for pytest I have implemented the following collection filters (a rough sketch of the mechanism follows below):

  • --collect-filter=whitelist (the default) uses the default_test_modules list
  • --collect-filter=blacklist uses a list of disabled tests (this could also be done with a setup_module() that calls skip()), but I implemented it to keep backward compatibility with nose
  • --collect-filter=none applies no filter at all

But I will drop them with pleasure.
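Roughly, the idea can be expressed with pytest's standard collection hooks; the following is only a sketch of the approach with placeholder module names, not the actual implementation:

```python
# conftest.py -- sketch of a --collect-filter option, not the real code.
# Both lists are placeholders (stand-ins for default_test_modules and for
# the nose-compatibility blacklist mentioned above).
WHITELIST = {"matplotlib.tests.test_axes"}
BLACKLIST = {"matplotlib.tests.test_skew"}


def pytest_addoption(parser):
    parser.addoption("--collect-filter", default="whitelist",
                     choices=["whitelist", "blacklist", "none"],
                     help="How to filter collected test modules.")


def pytest_collection_modifyitems(config, items):
    mode = config.getoption("--collect-filter")
    if mode == "none":
        return
    wanted = ((lambda mod: mod in WHITELIST) if mode == "whitelist"
              else (lambda mod: mod not in BLACKLIST))
    selected, deselected = [], []
    for item in items:
        (selected if wanted(item.module.__name__) else deselected).append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```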

it would bias the coverage towards an artificially high number which is not really representative of the real coverage of our code

Currently the code coverage percentage is already skewed (for example, only 3 backends appear in the coverage report, while the others are not included at all because they have zero coverage). I do not think anyone judges matplotlib by its coverage number, but test code is code too, and it is better to have an overestimated score than tests that do not run.
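For what it is worth, this is likely just how coverage.py decides what to report: unless a source is configured, only modules that were actually imported during the run show up at all, so never-imported backends vanish from the report rather than appearing at 0%. A small sketch of the difference (paths assumed):

```python
import coverage

# With source= set, files under the matplotlib package that were never
# imported are still listed (at 0%); without it they are simply absent.
cov = coverage.Coverage(source=["matplotlib"])
cov.start()
import matplotlib.pyplot  # anything that exercises the code under test
cov.stop()
cov.report(show_missing=True)
```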

@jenshnielsen
Member

I definitely support getting rid of the whitelisting as an option, or at least not making it the default. I think our experience has shown that it is very error prone.

The backends are definitely an issue for testing. That is the main driver behind deprecating the non-AGG WX backend and the GDK backend. We most definitely need more coverage for the Cairo, ps, and pgf backends and for the various interactive aspects of the GUI toolkits.

That being said, the backends are not excluded from coverage; it just happens that the Python code that implements most backends is a small fraction of the total code. That does not imply that we need to run all examples with all backends to get full coverage, since most of the tests exercise non-backend-specific code.

@Kojoley
Member Author

Kojoley commented Aug 7, 2016

I cannot say that including the tests in the coverage report is definitely a good solution, but as long as there is no other way (or I do not know of one) to prevent issues like #6839 (and the others mentioned above), I am for it.

@story645
Member

story645 commented Aug 23, 2016

So the way coverage is currently set up, anything using py.test tanks miserably: https://coveralls.io/jobs/17551007 (eta: it's an issue for #6889)

@QuLogic
Member

QuLogic commented Feb 5, 2017

Fixed by #8003.

QuLogic closed this as completed Feb 5, 2017
QuLogic added this to the 2.1 (next point release) milestone Feb 5, 2017