Dear all and especially the developers of GeoClaw,
I have been using GeoClaw for tsunami simulation and stochastic modeling in Cascadia. I am simulating tsunami inundation in the region [-129,-120] x [40,50].
However, I have noticed an artifact: gauge time series on dry cells usually show large square-wave-like steps. GeoClaw's gauge output on wet cells looks fine, but when I output the time series of the water surface at a gauge on a dry cell, I get a square-ish function that jumps up and down. I suspect this is related to AMR (adaptive mesh refinement), since the size of the cell containing the gauge changes over time. I would also like to take a closer look at whether this could be a problem for time series on wet cells as well (I assume not much, since GeoClaw has been validated on Tohoku).
I have attached gauge plots for a wet cell and for a dry cell. If you know which part of the code to look into, please let me know. I would like to understand why the elevation on dry land is square-shaped (AMR?), and which part of the GeoClaw code writes the gauge output. :)
The first plot was output at a wet cell (a location that is flooded by the tsunami). The second was output at a dry cell (high ground that is not flooded). Our work at UCL is currently only interested in the maximum water height (over time) at certain gauge locations. I know the maximum water height can also be obtained from the monitoring grid (rather than from the gauge time series); we are only interested in gauges that actually get flooded, so dry cells are not important to our project at the moment; and we know GeoClaw has been validated on large subduction-zone events such as the Tohoku earthquake. Still, we would like to know why gauges at unflooded locations exhibit this square behavior.
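Since we only need the maximum water height at gauges that actually flood, here is a minimal sketch (in plain NumPy, not GeoClaw's own API) of how one might compute that maximum while masking out dry samples. The function name, the `DRY_TOL` value, and the synthetic data are all illustrative assumptions, not GeoClaw defaults; the dry-cell behavior mimicked here is consistent with the AMR hypothesis above (on dry land the reported surface is essentially the local topography, which can jump when the grid refines or coarsens).

```python
import numpy as np

# Assumed dry-state tolerance in metres (illustrative, not GeoClaw's default).
DRY_TOL = 1e-3

def max_eta_when_wet(t, h, eta, dry_tol=DRY_TOL):
    """Return the maximum surface elevation eta over times when the gauge
    cell is wet (h > dry_tol), or None if the gauge never wets.

    Rationale: when h ~ 0 the reported eta is dominated by the cell's
    topography value, which depends on the current grid resolution, so
    dry samples can show square-wave jumps; masking them out keeps the
    maximum meaningful.
    """
    h = np.asarray(h)
    eta = np.asarray(eta)
    wet = h > dry_tol
    if not np.any(wet):
        return None
    return float(eta[wet].max())

# Synthetic example: the gauge is dry until t=3, then a wave arrives.
t = np.arange(6.0)
h = np.array([0.0, 0.0, 0.0, 0.5, 0.8, 0.2])   # water depth at the gauge
B = 10.0                                        # topography of the dry cell
eta = np.where(h > 0, 1.0 + h, B)               # dry samples report bare topography
print(max_eta_when_wet(t, h, eta))              # maximum over wet times only
```

With an unmasked `eta.max()` the dry-land topography value (10.0 here) would dominate; the masked version returns the wave's true peak instead.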
I have posted the time series plots here:
https://groups.google.com/forum/#!topic/claw-dev/J9rmSD54_XY
Please email mengdi.zheng@ucl.ac.uk if you have an answer.
Thanks,
Summer