
Add regression tests #19

Closed · 7 tasks
dmey opened this issue Jul 16, 2019 · 10 comments


dmey commented Jul 16, 2019

At the moment we do not carry out any sort of regression tests in uDALES. All tests are simply build and runtime tests. I would suggest using this issue as a working template for any tests we need to add. Please keep editing this section and add comments when you want a test to be implemented.

  • Divergence test – random velocity field and check for div u = 0 (see the sketch after this list).
    TODO: add description

  • Dry ABL run which is well documented (consistency with DALES)
    TODO: add description

  • Test for IBM
    TODO: add description

  • Test for driver simulation
    TODO: add description

  • Test uses EB
    TODO: add description

  • Scalar dispersion test
    TODO: add description

  • Chemistry test
    TODO: add description
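
For the divergence test, something along these lines could work. This is just a sketch assuming a collocated, periodic, uniform grid with numpy; uDALES uses a staggered grid, so the real check would difference face velocities instead. Names and tolerances are placeholders, not uDALES API:

```python
# Sketch of a divergence check on a periodic, uniform grid.
# Assumption: collocated numpy arrays u, v, w; the actual uDALES test
# would difference staggered face velocities.
import numpy as np

def max_divergence(u, v, w, dx, dy, dz):
    """Return max |div u| using central differences with periodic wrapping."""
    div = ((np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
           + (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dy)
           + (np.roll(w, -1, axis=2) - np.roll(w, 1, axis=2)) / (2 * dz))
    return np.abs(div).max()

rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal((16, 16, 16)) for _ in range(3))
print(max_divergence(u, v, w, 0.1, 0.1, 0.1))  # large for a raw random field;
# the regression test would instead assert < tol on the solver's projected field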

tomgrylls commented

From #64, I found that the regression tests fail if the input files are changed in a pull request. This means the CI fails whenever, e.g., the namoptions structure is changed, parameters are removed, or any other input-file structures are altered. It is okay if handling this failure is then left to the user's discretion; however, it could be avoided if the master-branch executable were run against the master-branch test simulation as well, rather than against the incoming branch's test simulation. See comments #64 (comment) and #64 (comment) for discussion; not necessary, but potentially something to discuss.
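
One way the CI could do this (just a sketch, not the current setup) is to extract the master-branch copy of a test case with `git show` before running the reference executable; the case name and paths below are illustrative:

```python
# Sketch: fetch the master-branch copy of a test case so the reference
# (master) executable runs against its own, unmodified inputs.
# Paths and case name are illustrative placeholders.
import subprocess
from pathlib import Path

def checkout_master_case(case: str, dest: Path) -> None:
    dest.mkdir(parents=True, exist_ok=True)
    # List the case's files as they exist on master...
    files = subprocess.run(
        ["git", "ls-tree", "-r", "--name-only", "master", f"tests/cases/{case}"],
        check=True, capture_output=True, text=True).stdout.split()
    # ...and write each one out with `git show master:<path>`
    # (flattens any subdirectories; fine for these flat case folders).
    for f in files:
        blob = subprocess.run(["git", "show", f"master:{f}"],
                              check=True, capture_output=True).stdout
        (dest / Path(f).name).write_bytes(blob)

checkout_master_case("001", Path("reference_inputs/001"))
```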


dmey commented Nov 21, 2020

@bss116 and @samoliverowens, at the moment we have some tests that we run on the CI using the small cases at https://github.com/uDALES/u-dales/tree/master/tests/cases. Given that we have the examples set up now, I think it would make more sense to use/reuse (some of) the cases from the examples folder. If so, could you suggest which ones, and whether we need to modify length etc.?

dmey modified the milestones: Future, 0.1.0 (Nov 21, 2020)

bss116 commented Nov 21, 2020

Yes, I fully agree. Of the test cases, 001 uses the energy balance --> replace it with 201 as a new case from the examples. 101 is a driver simulation --> 501 and 502 of the examples, but they are twice the size. 002 seems similar to 001 but without calculating the energy balance, on a much smaller domain with a longer simulation time.
So 201 can probably replace 001 without many changes. For 501+502 we might want to consider changing the resolution for the test case. I would also add example 001, which has no buildings. We should also keep 002 of the test cases, but give it a new name so it's not confused with the examples. That's my suggestion; let me know what you think.


dmey commented Nov 21, 2020

I remember that we decided to keep the domains etc. small so that the tests run in a reasonable amount of time. We should try to spend less than 5 minutes of runtime per job (and cumulatively for all tests). Would it be enough to just change the end time so that we run a couple of time steps, keeping all the other files the same? As long as the switches for the job are active during the simulation window (start to end), it would give us enough information to compare the two output files (as long as we write all of those). And we can keep the cases as they are in the examples and just patch the end time in the namelist. How does this sound?
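
Concretely, "patch the end time, then diff the outputs" could look something like the sketch below. It assumes the third-party f90nml and netCDF4 Python packages; the namelist group name (`run`), file names, and tolerance are placeholders to check against the actual namoptions/output layout:

```python
# Sketch: shorten a case's end time, then compare two output files.
# Group/variable names are assumptions about the namoptions layout.
import f90nml                      # third-party: pip install f90nml
import numpy as np
from netCDF4 import Dataset        # third-party: pip install netCDF4

# Patch only the end time, leaving every other setting untouched.
f90nml.patch("namoptions.001", {"run": {"runtime": 5}}, "namoptions.001.patched")

def compare(ref_path, new_path, rtol=1e-8):
    """Fail if any variable present in both files differs beyond rtol."""
    with Dataset(ref_path) as ref, Dataset(new_path) as new:
        for name in set(ref.variables) & set(new.variables):
            if not np.allclose(ref[name][:], new[name][:], rtol=rtol, equal_nan=True):
                raise AssertionError(f"regression in variable {name}")

compare("outputs/master/fielddump.001.nc", "outputs/branch/fielddump.001.nc")
```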


bss116 commented Nov 21, 2020

This is fine for example 001. The runtime there is only 11 seconds, but we can decrease it to 5 as in the current tests. For the energy balance and driver simulations (201 and 501+502) we have to check how long they take to run; they both have 128 grid cells in each direction, instead of the 64 in the current tests. I am currently rerunning the driver files for 501 on my mac, and it does take a long time...


dmey commented Nov 21, 2020

For 001 I noticed that we had sca1 in the output fields; are you OK to add them back in the examples? If 002 is a superset of 001, I would just run 001 only. About 501+502: the current runtime is 101, so let me know how long it takes and we can see about decreasing it to maybe 10 or 5. BTW, which compiler (version) are you using?


bss116 commented Nov 21, 2020

So this is already confusing, because I was talking about examples/001, not tests/cases/001 😄 examples/001 has no scalars. examples/101 and examples/102 have scalars, so we should probably add one of these to the tests too.

tests/cases/002 has similar parameters to tests/cases/001 (except no energy balance), but a different domain size, so all the input files are different. I would keep tests/cases/002 as it is, except for naming it differently to avoid confusion with examples/002.

OK, I'll check how long 10 s takes for examples/501+502. I might reduce the time step, as the interpolation there seems to be causing problems sometimes.

-- The Fortran compiler identification is GNU 9.3.0


dmey commented Nov 21, 2020

Cool. My original idea was to just delete all the cases in the tests folder if similar tests can be run using the cases in the examples folder instead. The main reason for doing this is maintainability, and to avoid issues if something gets changed in the code, as may be the case for the driver sim. So just using (some of) the examples and changing the time step would be preferable, I think.


bss116 commented Nov 21, 2020

Yes, let's give that a try. Here is what I would do: remove everything from tests except 002, then run the simulations 001, 102, 201, 501 and 502 (with 501 outputs!) from the examples with the following changes:

001:
runtime = 5
tfielddump = 4

102:
lwarmstart = .false.
runtime = 5
tfielddump = 4
tstatsdump = 4
lreadscal = .false.

201:
runtime = 5
tfielddump = 4
tstatsdump = 4

501:
runtime = 3
tstatsdump = 2
tsample = 1
driverstore = 21

502:
runtime = 2.5
tfielddump = 2
tstatsdump = 2
driverstore = 21

There might be some more changes required that I missed. @dmey, can you look into this and see if that works? You could also start with 001 and 201 unchanged, as by default they only run for 11 s. A sketch of applying these deltas in one pass is below.
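
Just a sketch, not tested: a small patcher that rewrites matching `key = value` lines in each case's namoptions file, so we don't have to guess which namelist group each key lives in. The `examples/<case>/namoptions.<case>` paths are assumptions to verify:

```python
# Sketch: apply the per-case deltas listed above by rewriting matching
# "key = value" lines in each namoptions file, whatever group they sit in.
# Case paths are illustrative; keys not found are reported, not invented.
import re
from pathlib import Path

PATCHES = {
    "001": {"runtime": "5", "tfielddump": "4"},
    "102": {"lwarmstart": ".false.", "runtime": "5", "tfielddump": "4",
            "tstatsdump": "4", "lreadscal": ".false."},
    "201": {"runtime": "5", "tfielddump": "4", "tstatsdump": "4"},
    "501": {"runtime": "3", "tstatsdump": "2", "tsample": "1", "driverstore": "21"},
    "502": {"runtime": "2.5", "tfielddump": "2", "tstatsdump": "2", "driverstore": "21"},
}

def patch_case(case: str) -> None:
    path = Path(f"examples/{case}/namoptions.{case}")
    text = path.read_text()
    for key, value in PATCHES[case].items():
        # Replace the value on any "key = ..." line, case-insensitively.
        text, n = re.subn(rf"(?mi)^(\s*{key}\s*=\s*)\S+", rf"\g<1>{value}", text)
        if n == 0:
            print(f"{case}: {key} not found; add it to the right group by hand")
    path.write_text(text)

for case in PATCHES:
    patch_case(case)
```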


dmey commented Nov 21, 2020

Cool -- will do

dmey self-assigned this Nov 21, 2020
dmey closed this as completed in 7da2b59 Nov 25, 2020