Need to grow the list of travis tests #13
Comments
What is the purpose of the travis tests? I am assuming that we still want any contributor to run testreport locally before making a pull request, so the travis test itself doesn't need to cover too much, does it? For changes to the Fortran code I would suggest:
Does travis come back with a negative result when one of the ./testreport tests fails because of lack of precision, i.e. when I actually change the result? Is it possible to configure travis so that it runs testreport only when changes have been made to the code (e.g. in the directories model, eesupp, pkg, tools, verification), and to run tests of the manual separately (one possible approach is sketched after this comment)? Currently, when I make a pull request for changes to the manual (directory docs), ./testreport is run, which doesn't make much sense, while the ReST code is not tested. Maybe we can have simple tests for the manual that
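A minimal sketch of such path-based gating, assuming a Travis build environment. TRAVIS_COMMIT_RANGE is set by Travis; the testreport invocation and the sphinx-build paths are illustrative guesses, not the project's actual configuration:

```sh
#!/bin/sh
# Hypothetical Travis helper (illustrative only): run the heavy
# verification suite when code directories change, and a light docs
# build when only the manual changes.
# TRAVIS_COMMIT_RANGE is provided by Travis for each build.
changed=$(git diff --name-only "$TRAVIS_COMMIT_RANGE")

if echo "$changed" | grep -qE '^(model|eesupp|pkg|tools|verification)/'; then
  # Fortran/code change: run the verification tests (experiment list TBD)
  ./testreport
fi

if echo "$changed" | grep -q '^docs/'; then
  # Manual-only change: check that the ReST sources build cleanly
  # (assumes a Sphinx setup under docs/; -W turns warnings into errors)
  sphinx-build -W -b html docs docs/_build/html
fi
```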
@mjlosch that sounds like a good list to me.
Note to self - For reference, some useful python assert examples are in here: https://github.com/edoddridge/aronnax/blob/master/test/output_preservation_test.py Still to figure out are tests against Intel, PGI, TAF, and on various cluster and HPC environments. These are currently used to ensure portability and check for unintended consequences of new code. Maybe we could set up webhooks for these that are triggered when a PR comes in or is updated. That looks to be possible; not sure yet how to communicate pass/fail back. Note - curl can be used to get PRs and PR content, e.g. as sketched below.
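For illustration only, a hedged sketch of the kind of curl calls meant here, against the public GitHub REST API (OWNER/REPO and the PR number are placeholders; the original example was not preserved):

```sh
# List open pull requests for a repository (returns JSON)
curl -s "https://api.github.com/repos/OWNER/REPO/pulls?state=open"

# Fetch the content of a single PR, e.g. the list of files it changes
# (PR number 42 is hypothetical)
curl -s "https://api.github.com/repos/OWNER/REPO/pulls/42/files"
```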
This https://enterprise.travis-ci.com/Travis.CI.Enterprise.Information.Sheet.pdf may be a way to do builds against licensed software and custom cluster environments within the Travis framework.
@jm-c and @jahn can we start to make a list here of what we view as essential travis tests (experiments and OS variants) so that we can merge more confidently.
It would be nice if things like @mjlosch's #11 were eventually sufficiently tested by travis that the chances of them breaking something were relatively low. Current test coverage is certainly not enough! I don't think we can get to perfect coverage, but more than the placeholder we have would be good.