Longer tests, logging, warnings #54
Conversation
Can this be automated somehow? Like can there be a private function to go and change consistency tests to match the new reality if we make a breaking change we want to pass?
Yeah, that's an interesting idea. It could probably be automated somehow. Automating would require editing the python code via a script though, so would probably need to be a bash script. Automating that would certainly make it easier to write a ton more tests though.
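One hedged sketch of that automation, as a Python helper rather than a bash script (hypothetical names; this assumes the expected values live in the test file as assignments like `_exp1 = np.array([...])`):

```python
import re

def update_expected(test_file, name, new_values):
    """Rewrite the array literal assigned to `name` in `test_file`.

    Intended for intentional breaking changes: after verifying the new
    model output by hand, call this to refresh the hard-coded
    consistency values instead of editing each test manually.
    """
    with open(test_file) as f:
        src = f.read()
    literal = ', '.join(repr(v) for v in new_values)
    # Match e.g. "_exp1 = np.array([...])" and swap in the new literal.
    pattern = re.escape(name) + r" = np\.array\(\[[^\]]*\]\)"
    src = re.sub(pattern, f"{name} = np.array([{literal}])", src)
    with open(test_file, 'w') as f:
        f.write(src)
```

A script like this could loop over every consistency test after a known breaking change, so the "homework" of updating expectations shrinks to one reviewed command.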
hi @elbeejay, is this what you had in mind here? Or were you thinking like a long, serious model run? The latter could be configured as another "job" on travis, to call some python script that runs the model for a few hundred timesteps (jobs time out at some point...). If you mean the latter, then this should be disconnected from #51 before merging, or it will accidentally close the issue. Either way, I think this can be merged now if you are good with it 👍
> hi @elbeejay, is this what you had in mind here? Or were you thinking like a long, serious model run?
>
> The latter could be configured as another "job" on travis, to call some python script that runs the model for a few hundred timesteps (jobs time out at some point...).

I like the effort and think this is definitely a step forward for the unit testing. I just can't wrap my head around how we would go about testing the ability of the model (as we continue to alter it) to transition from the initial jet to a reasonable channel network. This (at least to me) is one of the critical things DeltaRCM has to do, and there is no obvious or intuitive way to test that without a model run of many timesteps (the point about travis timing out is well taken).

I don't know how we get around it, but that is my concern: if we break the model and it no longer forms a channel network, then we have to revert to an older version of the code with no real knowledge of where the 'breaking' change was made.

> If you mean the latter, then this should be disconnected from #51 before merging, or it will accidentally close the issue.

If you don't mind leaving #51 open, I think we could improve upon it. Maybe my initializing tests from various pre-developed topographies will help get around running super long tests; not sure, just an early thought.

> Either way, I think this can be merged now if you are good with it 👍

Yes I agree, these are nice changes that continue to move us forward in the right direction 😄
This test throws an error by trying to index cell 1800 of a 30x60 array. This exceeds the limit of the array. I suspect this is a bug with the unravel in shared_tools.

The xfail should be removed when the bug is fixed.
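As a standalone illustration of why index 1800 is out of bounds (not the project's actual `shared_tools` code): a 30x60 grid has 1800 cells, so valid flat indices run 0 through 1799, and NumPy's own unravel rejects 1800:

```python
import numpy as np

# A 30x60 grid holds 1800 cells, so valid flat indices are 0..1799.
shape = (30, 60)
assert np.unravel_index(1799, shape) == (29, 59)  # last valid cell

# Flat index 1800 is one past the end; np.unravel_index raises rather
# than silently returning an out-of-bounds (row, col) pair.
try:
    np.unravel_index(1800, shape)
except ValueError:
    print("index 1800 is out of bounds for shape (30, 60)")
```

A hand-rolled unravel without this bounds check would map 1800 to row 30, one past the last row, which matches the IndexError described above.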
Thanks for flagging this test, as well as leaving the segfault issues in the commented-out tests below. It will be helpful when debugging to see if these get fixed.
```python
def test_long_multi_validation(tmp_path):
    # IndexError on corner.

    file_name = 'user_parameters.yaml'
    p, f = utilities.create_temporary_file(tmp_path, file_name)
    utilities.write_parameter_to_file(f, 'seed', 42)
    utilities.write_parameter_to_file(f, 'Length', 600.)
    utilities.write_parameter_to_file(f, 'Width', 600.)
    utilities.write_parameter_to_file(f, 'dx', 5)
    utilities.write_parameter_to_file(f, 'Np_water', 10)
    utilities.write_parameter_to_file(f, 'Np_sed', 10)
    utilities.write_parameter_to_file(f, 'f_bedload', 0.05)
    f.close()
    delta = DeltaModel(input_file=p)

    for _ in range(0, 3):
        delta.update()

    # slice is: test_DeltaModel.eta[:5, 62]
    # print(delta.eta[:5, 62])

    _exp1 = np.array([-4.971009, -3.722004, -4.973, -3.7240038, -3.7250037])
    assert np.all(delta.eta[:5, 62] == pytest.approx(_exp1))

    for _ in range(0, 10):
        delta.update()

    # slice is: test_DeltaModel.eta[:5, 62]
    # print(delta.eta[:5, 62])

    _exp2 = np.array([-4.971052, -2.0813923, -2.0824013, -4.6614914, -1.5570664])
    assert np.all(delta.eta[:5, 62] == pytest.approx(_exp2))
```
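The hard-coded comparisons above go through `pytest.approx`, which compares floats within a relative tolerance (1e-6 by default) rather than bit-for-bit, so re-running the model on a different machine with tiny floating-point drift still passes. A minimal standalone sketch of that behavior:

```python
import numpy as np
import pytest

# pytest.approx compares within a relative tolerance (default 1e-6),
# so tiny floating-point differences do not fail the assertion.
observed = np.array([-4.9710091, -3.7220041])
expected = np.array([-4.971009, -3.722004])
assert np.all(observed == pytest.approx(expected))

# An exact equality check on the same values would be brittle:
assert not np.array_equal(observed, expected)
```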
This is of course longer than a single-step test. But I guess my concern is about reproduction of one of the aspects that makes the model unique: the ability to form a distributary channel network. So while this is definitely a step ahead and an improvement to our previous checks, I think we could do better.
yeah, I agree. I'm going to unlink the issue so it stays open.
Longer tests, logging, warnings
This PR implements some additional consistency testing. I was reluctant to make many tests, as was initially suggested in discussion and #51, because if there are many tests and you make a known breaking change, it's quite a bit of homework to go back and make all those tests pass again. I added two more tests, such that on a breaking change, the dev would have to go back and change 4 lines to match the new bed elevations.
I also did work on the i/o file logger, and on warnings changes and testing. Previously the logger wrote to a file in the cwd, but now the log is written to the location defined by `out_dir` in the `.yml` config file. Now, every time something happens (just a few markers through the code), the action is written to the log regardless of the verbosity (only one item is written to the log per timestep, so this doesn't appreciably slow things down). The `verbose` flag controls whether these messages are also printed to the console. These actions could be wrapped into a function to simplify the code (I'll open an issue if you all like these changes).

This PR ~~will probably close~~ relates to but does not complete #51, and addresses most of #45. Between this PR and #52, most of #35 would be implemented too (but still needs to be completely addressed once we rewrite to use hdf in #13). Will rebase the code once #52 is merged.
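The logging pattern described above, a file in `out_dir` that always receives messages, with console echo gated by `verbose`, could be sketched roughly like this (hypothetical function and names; not the PR's actual implementation):

```python
import logging
import os

def init_logger(out_dir, verbose=False):
    # Hypothetical sketch: every logged action always goes to a file
    # inside out_dir; it is echoed to the console only when the
    # verbose flag is set.
    os.makedirs(out_dir, exist_ok=True)
    logger = logging.getLogger('pyDeltaRCM')
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler(os.path.join(out_dir, 'model.log')))
    if verbose:
        logger.addHandler(logging.StreamHandler())
    return logger
```

Usage would look like `init_logger('output', verbose=True).info('timestep complete')`; with `verbose=False` the same call writes only to `output/model.log`.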