
End-to-end test module #61

Merged: 3 commits into spencerahill:develop on Feb 4, 2016

Conversation

spencerkclark
Collaborator

This is a start of a minimal end-to-end testing module (as discussed in #42). Currently I've just included a test of one project, one model, one run, and one model-native variable. The structure is based on comments in pull request #43 and further developments in spencerkclark/aospy-db#2.

There is ample room to make this more comprehensive, but I want to get the basic structure down first. (This is meant to replace pull request #43; rather than modify the existing one, I thought it would be cleaner to start fresh.)

Please let me know if you have any comments or suggestions. Thanks!

spencerahill mentioned this pull request on Feb 4, 2016
@spencerahill
Owner

At first glance this looks great. I'll take a closer look soon. Thanks!


if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(TestAospy)
    unittest.TextTestRunner(verbosity=2).run(suite)
Owner

What's the motivation behind these calls? I think the idiom for the main block of test modules is just unittest.main() (see e.g. test_utils.py).
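
A minimal sketch of the suggested idiom, for reference (the TestAospy name comes from the snippet above; the rest is generic unittest, and unittest.main also accepts a verbosity argument if verbose output is still wanted):

import unittest

if __name__ == '__main__':
    # Discover and run every TestCase defined in this module (e.g. TestAospy),
    # matching the idiom used in test_utils.py.
    unittest.main()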

Collaborator Author

No real motivation here; this is more or less just carried over from my experimenting with different levels of verbose output from the unittest module. I have no issue making this consistent with the other test modules already defined in aospy.

import runs


am2 = Model(
Owner

Here and elsewhere that references GFDL-specific files, we might want to put in some try/except guards so that these don't cause the overall aospy test suite to crash on non-GFDL machines (thinking about this led me to open #62).

For the tests themselves, we can use unittest's skip decorators.
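
A rough sketch of the kind of guard meant here (load_gfdl_model and TestAM2 are placeholders, not actual aospy code; the real module-level call would be whatever constructs the objects that open GFDL-specific files):

import unittest

def load_gfdl_model():
    # Placeholder for a module-level object that reads GFDL-only input files.
    raise IOError("grid file only exists on the GFDL filesystem")

try:
    am2 = load_gfdl_model()
except (IOError, OSError):
    # Importing the test module still succeeds on non-GFDL machines.
    am2 = None

@unittest.skipIf(am2 is None, "requires GFDL-specific input files")
class TestAM2(unittest.TestCase):
    def test_am2_annual_mean(self):
        self.assertIsNotNone(am2)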

Collaborator Author

Let me know if 973d868 addresses this to your liking. It's not as general as we might like (the check doesn't just look for the grid files; it also looks for the olr files), but I think it is sufficient to determine whether a test can only be run in the GFDL environment.

@spencerahill
Owner

Other than my in-line comments, this looks great and works for me over ssh. Out of curiosity I also cloned it to my laptop, and the result is just a failure of test_am2_annual_mean with RuntimeError: No such file or directory. So I think all we need is a skipIf decorator, with the condition being a check for whether the needed file exists -- probably via os.path.isfile.

This PR, combined with #63, will give me so much more peace of mind moving forward.
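
A small sketch of that skipIf pattern, assuming a hypothetical path to one of the required GFDL input files:

import os.path
import unittest

_GFDL_OLR_FILE = '/archive/path/to/am2/olr.nc'  # hypothetical GFDL-only path

class TestAM2(unittest.TestCase):
    @unittest.skipIf(not os.path.isfile(_GFDL_OLR_FILE),
                     'required GFDL input file not found')
    def test_am2_annual_mean(self):
        # Placeholder for the real end-to-end calculation against GFDL data.
        self.assertTrue(os.path.isfile(_GFDL_OLR_FILE))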

spencerahill pushed a commit that referenced this pull request on Feb 4, 2016
spencerahill merged commit 97e1c15 into spencerahill:develop on Feb 4, 2016
@spencerahill
Owner

Yes, this works perfectly -- it skips that test on my local machine, and the test runs and passes at GFDL.

Thanks a lot!
