Add regression tests #19
From #64, I found that the regression tests fail if the input files are changed in a pull request. This leads to the CI failing if e.g. the namoptions structure is changed, parameters are removed, or any other input file structures are altered. It is okay if this failure is then left to the user's discretion, but it could be avoided if the master branch executable were also run on the master branch's test simulation, as opposed to the incoming branch's test simulation. See comments #64 (comment) and #64 (comment) for discussion - not necessary, but potentially something to discuss.
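The comparison step itself could be prototyped as follows (a minimal sketch, not uDALES code; the function name and tolerances are illustrative): given the same output field from a reference run and a candidate run, flag any difference beyond floating-point noise.

```python
import numpy as np

# Sketch of an output regression check (names and tolerances are hypothetical):
# returns True when the candidate field matches the reference field within
# floating-point tolerance, False otherwise.
def fields_match(reference, candidate, rtol=1e-8, atol=1e-10):
    reference = np.asarray(reference, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    if reference.shape != candidate.shape:
        return False  # e.g. the grid or output layout changed in the incoming branch
    return bool(np.allclose(reference, candidate, rtol=rtol, atol=atol))
```

A shape mismatch is treated as a failure rather than an error, since a changed input file (the case discussed above) would typically change the output shape.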
@bss116 and @samoliverowens at the moment we have some tests that we run on the CI using the following small cases at https://github.com/uDALES/u-dales/tree/master/tests/cases. Given that we have the examples set up now, I think it would make more sense to reuse (some of) the cases from the examples folders. If so, could you suggest which ones, and whether we need to modify the length etc.?
Yes, I fully agree. Of the test cases, 001 uses the energy balance --> 201 as the new case from the examples. 101 is a driver simulation --> 501 and 502 of the examples, but they are twice the size. 002 seems similar to 001 but without calculating the energy balance: a much smaller domain but a longer simulation time.
I remember that we decided to keep the domains etc. small so that the tests would run in a reasonable amount of time; we should aim to spend less than 5 minutes of runtime per job (and cumulatively for all tests). Would it be enough to just change the end time so that we run a couple of time steps, and keep all the other files the same? As long as the switches for the job are active during the simulation window (start -- end), it would give us enough information to compare the two output files (as long as we write all of those). That way we can keep the cases as they are in the examples and just patch the end time in the namelist. How does this sound?
This is fine for example 001. The runtime there is only 11 seconds but we can decrease to 5 as in the current tests. For the energy balance and driver simulations (201 and 501+502) we have to check how long it takes to run them, they are both 128 grid cells in each direction, instead of the 64 in the current tests. I am currently rerunning the driver files for 501 on my mac, and it does take a long time... |
For 001 I noticed that we had |
This is already confusing, because I talked about examples/001 and not tests/cases/001 😄 examples/001 has no scalars. examples/101 and examples/102 have scalars, so we should probably add one of these to the tests too. tests/cases/002 has similar parameters to tests/cases/001 (except no energy balance), but a different domain size, so all the input files are changed. I would keep tests/cases/002 as it is, except rename it to avoid confusion with examples/002. OK, I'll check how long 10 s takes for examples/501+502. I might reduce the time step, as interpolation there seems to be causing problems sometimes.
Cool, and my original idea was to just delete all the cases in
Yes, let's give that a try. Here is what I would do: remove everything from 001: 102: 201: 501: 502: There might be some more changes required that I missed. @dmey, can you look into this and see if that works? You could also start with 001 and 201 unchanged, as they are by default only run for 11 s.
Cool -- will do.
At the moment we do not carry out any regression tests in uDALES; all tests are simply build and runtime tests. I would suggest using this issue as a working template for any tests we need to add. Please keep editing this section and add comments when you want a test to be implemented.
Divergence test – random velocity field and check for div u = 0.
TODO: add description
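The divergence test above could be prototyped offline before being wired into the Fortran test suite. A minimal numpy sketch (an illustration only, not uDALES code): build a random periodic velocity field from a streamfunction, so that it is discretely divergence-free by construction, then verify that the discrete divergence vanishes to machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
dx = 1.0 / n

def ddx(f):  # central difference in x (axis 0), periodic
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def ddy(f):  # central difference in y (axis 1), periodic
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

# Random streamfunction; u = dpsi/dy, v = -dpsi/dx is discretely
# divergence-free because the central-difference operators commute.
psi = rng.standard_normal((n, n))
u = ddy(psi)
v = -ddx(psi)

div = ddx(u) + ddy(v)
max_div = float(np.max(np.abs(div)))
```

In a real test one would instead take the velocity field after the solver's pressure-projection step and assert that `max_div` stays below a tolerance tied to machine precision.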
Dry ABL run which is well documented (consistency with DALES)
TODO: add description
Test for IBM
TODO: add description
Test for driver simulation
TODO: add description
Test uses EB
TODO: add description
Scalar dispersion test
TODO: add description
Chemistry test
TODO: add description