Further improve test coverage #71

Open · 6 tasks
mperrin opened this issue Feb 12, 2015 · 6 comments
@mperrin
Owner

mperrin commented Feb 12, 2015

This is just a catch-all issue for places that need more test coverage.

  • Equivalence of inverse MFT with inverse FFT
  • Instrument class has pretty sparse coverage so far
  • None of the display code is tested at all; needs setup for headless testing without on-screen drawing (see the sketch after this list).
  • Band-limited coronagraph (BandLimitedCoron) needs test coverage
  • MultiHexagonAperture needs test coverage
  • Zernike utility functions need test coverage
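A minimal sketch of one way to do the headless setup, assuming a pytest-style test suite: force the non-interactive Agg backend in a `conftest.py` before pyplot is imported anywhere, so display code can run on CI machines with no X server. The file and fixture here are illustrative, not existing poppy code.

```python
# conftest.py (illustrative) -- select a non-interactive backend so display
# code can run on headless CI machines without an X server.
import matplotlib

# Must happen before any `import matplotlib.pyplot` in the test suite.
matplotlib.use("Agg")

import matplotlib.pyplot as plt
import pytest


@pytest.fixture(autouse=True)
def close_figures():
    """Close any figures a test created, so tests don't leak memory."""
    yield
    plt.close("all")
```

Doing this in `conftest.py` matters because, at least in older matplotlib versions, calling `matplotlib.use()` after pyplot has already been imported warns or has no effect.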
douglase pushed a commit to douglase/poppy that referenced this issue May 1, 2015
Include astropy_helpers in the affiliated package's tarball.
@mperrin
Owner Author

mperrin commented Aug 18, 2015

@mperrin mperrin added this to the 0.5 milestone Nov 18, 2015
@mperrin mperrin modified the milestones: 0.5.1, 0.5 Apr 21, 2016
@mperrin
Owner Author

mperrin commented Sep 26, 2016

Now that we have Coveralls working again, it's somewhat more user-friendly to address this.
https://coveralls.io/github/mperrin/poppy

The biggest offender in terms of lacking coverage is the display code, which basically doesn't get exercised by the existing test suite at all. @josePhoenix do you have any experience with best practices for writing tests for matplotlib code? Can you think of anyone here we could chat with?

I'm not even sure what level of thoroughness we should target. It might be enough to just ensure that all the plotting and display code runs without crashing on various test-case inputs, without putting much (or any) effort into pixel-level evaluation of the correctness of the outputs. Resources are limited and this isn't our top priority, but I'd like to at least take a first-order pass at not totally neglecting about a quarter of the overall codebase.
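A minimal sketch of that kind of "runs without crashing" smoke test, assuming the headless Agg setup above. `display_psf` here is a hypothetical stand-in for whichever poppy display function is under test; the pattern is just: call it on a simple input, check that something got drawn, close the figures.

```python
import matplotlib
matplotlib.use("Agg")  # no-op if conftest.py already selected Agg
import matplotlib.pyplot as plt
import numpy as np
import pytest


def display_psf(psf_array):
    """Hypothetical stand-in for a poppy display function under test."""
    fig, ax = plt.subplots()
    ax.imshow(np.log10(psf_array + 1e-12), origin="lower")
    return ax


@pytest.mark.parametrize("shape", [(64, 64), (128, 128)])
def test_display_psf_runs(shape):
    # Smoke test: only check that the call completes and draws an image,
    # not that the pixels are correct.
    psf = np.random.random(shape)
    ax = display_psf(psf)
    assert len(ax.images) == 1
    plt.close("all")
```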

@josePhoenix
Collaborator

This gets back to what I brought up a while ago re: using the object-oriented API instead of pyplot. Trying to test code that uses pyplot means you have to understand how to query the pyplot state machine... which I don't relish the thought of. Of course, pixel-wise comparisons of output plots would also work, and that is in fact how matplotlib tests itself. The PNG backend uses the same Agg renderer as the interactive display backends, so we can be confident that correctly producing the PNG means users will see the Right Thing.
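A rough illustration of the distinction (not poppy code): a pyplot-style function leaves its output in hidden global state, while an OO-style function that accepts and returns an Axes can be handed a throwaway figure and inspected directly in a test.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np


def plot_profile_pyplot(values):
    """pyplot style: draws on whatever the 'current' axes happens to be."""
    plt.plot(values)
    plt.xlabel("pixel")


def plot_profile_oo(values, ax=None):
    """OO style: explicit Axes in, Axes out -- easy to test in isolation."""
    if ax is None:
        _, ax = plt.subplots()
    ax.plot(values)
    ax.set_xlabel("pixel")
    return ax


def test_plot_profile_oo():
    ax = plot_profile_oo(np.arange(10))
    assert len(ax.lines) == 1
    assert ax.get_xlabel() == "pixel"
    plt.close("all")
```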

@josePhoenix
Collaborator

@mperrin
Owner Author

mperrin commented Sep 26, 2016

I like the idea from that reference of putting together a few end-to-end tests, comparing the results to static pre-generated PNGs, and using a thresholded histogram of the results. I'm totally willing to buy their point that such an approach is more efficient than trying to write lots of little individual unit tests.
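A hedged sketch of that kind of end-to-end check: render a plot to a PNG, load it alongside a pre-generated baseline image, and fail only if the histogram of per-pixel differences shows too many pixels above a threshold. The baseline path, threshold, and tolerance here are placeholders.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np


def assert_images_close(actual_png, baseline_png, diff_threshold=0.05,
                        max_bad_fraction=0.01):
    """Fail if too many pixels differ from the baseline by more than the threshold."""
    actual = mpimg.imread(actual_png)      # PNGs load as floats in [0, 1]
    baseline = mpimg.imread(baseline_png)
    assert actual.shape == baseline.shape, "image sizes differ"
    diff = np.abs(actual.astype(float) - baseline.astype(float))
    # Thresholded histogram of differences: what fraction of pixels exceed it?
    bad_fraction = np.mean(diff > diff_threshold)
    assert bad_fraction <= max_bad_fraction, (
        f"{bad_fraction:.1%} of pixels differ by more than {diff_threshold}"
    )


def test_end_to_end_plot(tmp_path):
    fig, ax = plt.subplots()
    ax.imshow(np.outer(np.hanning(64), np.hanning(64)), origin="lower")
    actual = tmp_path / "psf_display.png"
    fig.savefig(actual, dpi=100)
    plt.close(fig)
    # The baseline would be a pre-generated, checked-in reference image.
    assert_images_close(actual, "tests/baseline/psf_display.png")
```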

@josePhoenix
Collaborator

There are some utilities in matplotlib.testing.compare that we could probably repurpose. They use nosetests instead of pytest, so I don't think we can grab their decorator as-is.
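For example, matplotlib.testing.compare.compare_images can be called directly from a pytest test without the nose-based decorator; it returns None when the two images agree within the given RMS tolerance, and an error message otherwise. The baseline path and tolerance here are placeholders.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.testing.compare import compare_images


def test_psf_display_matches_baseline(tmp_path):
    fig, ax = plt.subplots()
    ax.imshow(np.outer(np.hanning(64), np.hanning(64)), origin="lower")
    actual = str(tmp_path / "psf_display.png")
    fig.savefig(actual, dpi=100)
    plt.close(fig)

    # compare_images returns None on success, or an error message (and writes
    # a diff image alongside) when the RMS difference exceeds `tol`.
    result = compare_images("tests/baseline/psf_display.png", actual, tol=10)
    assert result is None, result
```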
