Adding Solverstatisticscheck #36
base: develop
Conversation
Force-pushed from b5464a8 to f27933c.
Please have a look at GEOS-DEV/GEOS#2680
For the curve_check, that's what happens: basically, the solution is interpolated. The restarts will fail if the timestepping is different (and they should). I feel that interpolating the number of iterations does not make much sense; it won't give us much more information than comparing the total number of iterations or the average number per time step. We could decide to only check that on all machines apart from the one used to create the baseline, where we require a 1-to-1 match.
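For illustration, here is a minimal sketch of the kind of comparison described above; the function name, arguments, and the 10% tolerance are assumptions, not what this PR implements:

```python
# Minimal sketch: compare nonlinear iteration counts against a baseline, requiring an
# exact match only on the machine that produced the baseline and a relative tolerance
# on the totals everywhere else.
def compare_iteration_counts(current, baseline, is_baseline_machine, rel_tol=0.1):
    """current/baseline are lists of nonlinear iteration counts per time step."""
    errors = []
    if is_baseline_machine:
        # Require a one-to-one match on the baseline machine.
        if current != baseline:
            errors.append(f"Iteration counts differ from baseline: {current} vs {baseline}")
    else:
        # Elsewhere, only compare the totals within a relative tolerance.
        total_current, total_baseline = sum(current), sum(baseline)
        if abs(total_current - total_baseline) > rel_tol * max(total_baseline, 1):
            errors.append(f"Total iterations {total_current} deviate from baseline "
                          f"{total_baseline} by more than {rel_tol:.0%}")
    return errors
```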
Can someone review this PR so that we can move forward with it?
I don't think I will add checks for sequential strategies for now. The goal is to add some coverage for the MGR strategies, which are only used for fully coupled solvers.
Conceptually this is a good start. I worry that we might want to use a different term than performancecheck. Generically, what we're doing with this is checking for any divergence from a baseline that is not strictly about problem correctness. Also, longer-term, I think we would want to make the tool more generic: extract arbitrary metrics using regexes (or just Python functions which consume strings and return metric info, if we don't want to be strongly tied to regexes) to construct plots we do curve-checking against.
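To make that longer-term idea concrete, here is a rough sketch of a generic metric-extraction interface where each metric is described either by a regex or by a plain Python function; all names and patterns below are hypothetical and not part of this PR:

```python
import re
from typing import Callable, Dict, List, Optional, Union

# A metric extractor is either a regex with one capture group, or a callable that
# consumes a log line and returns a value (or None if the line does not match).
MetricExtractor = Union[str, Callable[[str], Optional[float]]]

def extract_metrics(log_lines: List[str],
                    extractors: Dict[str, MetricExtractor]) -> Dict[str, List[float]]:
    """Apply each extractor to every log line and collect the matched values."""
    results: Dict[str, List[float]] = {name: [] for name in extractors}
    for line in log_lines:
        for name, extractor in extractors.items():
            if isinstance(extractor, str):
                match = re.search(extractor, line)
                if match:
                    results[name].append(float(match.group(1)))
            else:
                value = extractor(line)
                if value is not None:
                    results[name].append(value)
    return results

# Hypothetical usage: mix a regex-based extractor with a plain python function.
extractors = {
    "newton_iterations": r"NewtonIter:\s*(\d+)",
    "dt": lambda line: float(line.split("dt:")[1].split()[0]) if "dt:" in line else None,
}
```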
How about verification?
Classically yeah, it would be validation == correctness of the solution, verification == correctness of execution.
Force-pushed from 73a6165 to 7a434bf.
Maybe, since at the moment that's the only thing it is doing, we can simply call it solver_statistics_check.
To be honest, I really hope that eventually we can output this data through time-history and completely avoid parsing the log. I just did it for now because it felt like we needed a quick solution.
Good point!
```python
    errors:
    """
    # Define regular expressions
    cycle_pattern = r"\d+\s*:\s*Time: [\d.e+-]+ s, dt: [\d.e+-]+ s, Cycle: (\d+)"
```
With the new changes in log timestamp format, this would miss anything that is in minutes, years, etc. Maybe switch `s,` to `\s*,`?
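For example, a unit-agnostic variant could look like the sketch below, assuming the log still prints `Time: <value> <unit>, dt: <value> <unit>, Cycle: <n>` (the exact log format is an assumption here):

```python
import re

# Accept any unit token (s, min, yr, ...) between the value and the comma.
cycle_pattern = r"\d+\s*:\s*Time: [\d.e+-]+\s*\S*,\s*dt: [\d.e+-]+\s*\S*,\s*Cycle: (\d+)"

line = "0 : Time: 3.1536e+07 s, dt: 86400 s, Cycle: 12"
match = re.search(cycle_pattern, line)
if match:
    print(match.group(1))  # -> 12
```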
```python
    # Create an HDF5 file for storing the data
    output_fileName = os.path.join(os.path.dirname(fname), 'extracted_solverStat_data.h5')
    with h5py.File(output_fileName, 'w') as hdf5_file:
```
Since you already have hdf5_wrapper as a dependency, you could switch this to:

```python
with hdf5_wrapper.hdf5_wrapper(output_fileName, 'w') as hdf5_file:
    ...
```

and then write to the object as if it were a simple Python dictionary:

```python
hdf5_file['some_key'] = {'some': ['value']}
```
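A short usage sketch of that suggestion, writing the extracted statistics as a nested dictionary; the key names, values, and the `fname` placeholder below are illustrative, not the ones used in the PR:

```python
import os
import hdf5_wrapper

fname = 'path/to/geos_run.log'  # log file being parsed (placeholder)
output_fileName = os.path.join(os.path.dirname(fname), 'extracted_solverStat_data.h5')

# Hypothetical extracted statistics, stored as plain python lists.
solver_stats = {
    'cycle': [0, 1, 2],
    'newton_iterations': [4, 5, 3],
    'linear_iterations': [35, 42, 28],
}

# hdf5_wrapper accepts nested dictionaries directly, as shown in the comment above.
with hdf5_wrapper.hdf5_wrapper(output_fileName, 'w') as hdf5_file:
    hdf5_file['solver_statistics'] = solver_stats
```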
```python
def solver_statistics_check_parser():
    """
    Build the curve check parser
```
The docstring should say "Build the solver statistics parser" rather than "Build the curve check parser".

Recently just saw this:
in
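For context, here is a minimal sketch of what the parser construction might look like if it mirrors the existing curve check parser; the argument names and defaults are assumptions, not necessarily what the PR contains:

```python
import argparse

def solver_statistics_check_parser():
    """
    Build the solver statistics check parser
    """
    parser = argparse.ArgumentParser(description="Check GEOS solver statistics against a baseline")
    parser.add_argument("filename", help="Path to the GEOS log file to parse")
    parser.add_argument("baseline", help="Path to the baseline HDF5 file")
    parser.add_argument("-t", "--tolerance", type=float, default=0.0,
                        help="Allowed relative deviation in iteration counts")
    return parser
```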
In this PR I have added a new check called performancecheck (feel free to suggest a different name).
To summarize:
I could add some plots of the number of iterations if we think they could be useful.
I still see 2 main issues:
@castelletto1, @francoishamon, @victorapm: we should also list the MGR strategies we want to test and pick a case for each of them so that I can start adding tests.
For now, I have added: