
Have a standardized way of testing correctness #23

Closed
astrofrog opened this issue Mar 28, 2016 · 3 comments


@astrofrog
Member

For testing correctness, the pattern is always the same: create a region with some arguments, then check that certain fixed methods or properties return a given result. We could therefore cut the boilerplate by maintaining a list of regions to test together with reference results. One way to do this would be to store the regions in serialized form and keep a mapping from each serialized file to its reference results. For instance, we could have a table in the following format (just an idea to start the discussion):

region_file,    area,    to_mask,         to_pixel
region1.xml,     5.4,  mask1.fits, region1_pix.xml
region2.xml,     2.3,  mask2.fits, region2_pix.xml

We would then have a parametrized test that loops over each entry and gives a pass/fail status. This would save writing a lot of code and make it easier to add many tests for corner cases, etc. It would also make it easy to inspect things like masks.

Ideally we would use a serialization format simpler than XML for this so that these can be written by hand, not generated.
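A minimal sketch of how the reference table could drive a parametrized test, assuming the CSV-like format proposed above; the table contents, the `load_reference_cases` helper, and the `read_region` reader in the commented-out test are all hypothetical, not part of any existing package:

```python
import csv
import io

# Hypothetical reference table in the proposed format: region file,
# expected area, reference mask file, reference pixel-region file.
REFERENCE_TABLE = """\
region_file,area,to_mask,to_pixel
region1.xml,5.4,mask1.fits,region1_pix.xml
region2.xml,2.3,mask2.fits,region2_pix.xml
"""

def load_reference_cases(text):
    """Parse the reference table into a list of test-case tuples."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        (row["region_file"], float(row["area"]),
         row["to_mask"], row["to_pixel"])
        for row in reader
    ]

# With pytest, each row then becomes one test, e.g.:
#
# @pytest.mark.parametrize(
#     "region_file, area, mask_file, pixel_file",
#     load_reference_cases(REFERENCE_TABLE),
# )
# def test_region(region_file, area, mask_file, pixel_file):
#     region = read_region(region_file)   # hypothetical reader
#     assert region.area == pytest.approx(area)
```

Keeping the table as plain text means new corner cases can be added by editing one file, with no new test code.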

@astrofrog
Member Author

@cdeil @joleroi @keflavich - I've now made a small package called pytest-fits (a spin-off of pytest-mpl) which makes it easy to check that the array or HDU output of a function is correct, and to generate the reference files. I developed it for the reproject package, but I think it can be useful here too. With this plugin, you can write a test like:

@pytest.mark.fits_compare
def test_succeeds():
    region = ...
    return region.to_mask(...)

then you can run the tests the first time with:

py.test --fits --fits-generate-path=tmp

and the tmp directory will contain the reference outputs. You can inspect these visually, and if they look correct, move them to a baseline sub-directory in the tests folder. Then, just run the tests with:

py.test --fits

to make sure the output is compared to the reference. Note that you can also add --fits to setup.cfg so that this option gets included automatically - see here for how the reproject package does it.
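A sketch of what the setup.cfg addition might look like, assuming pytest's standard addopts mechanism (the exact section name and the availability of the --fits flag depend on the pytest and pytest-fits versions in use):

```ini
# Hypothetical setup.cfg fragment: pass --fits to pytest by default
# so the reference comparison always runs.
[tool:pytest]
addopts = --fits
```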

@astrofrog
Member Author

Note that if we do this, we don't need the idea I raised above of a single file listing all the tests - instead we can just write parametrized tests as usual.

@astrofrog
Member Author

Done in #71
