Check variables sent from CTSM to WRF #911
In the process of updating this table with additional info... @weiwangncar I pinged you for your input below. If you search by your handle (weiwangncar) you should find all my questions. Please let me know if anything is unclear.
Variables listed from
Bill recommends that I focus on lnd --> atm variables (last two groups of variables below).
with prefix atm_to_cpl_Faxa_, cpl_lnd_atm_Faxa_
with prefix atm_to_cpl_Sa_, cpl_to_lnd_atm_Sa_
with prefix atm_to_cpl_, cpl_to_atm_, lnd_to_cpl_atm_, lnd_to_cpl_rof_, cpl_to_lnd_atm_
with prefix lnd_to_cpl_rof_Flrl_
with prefix cpl_to_atm_Fall_, lnd_to_cpl_atm_Fall_
OK flxdst[1-4]: ctsm DSTFLXT vs lilac sum(flxdst[1:4]) vs wrf ?? Not passed to WRF. Placeholder.
with prefix cpl_to_atm_Sl_, lnd_to_cpl_atm_Sl_
OK z0m: ctsm vs wrf z0. Comes with this comment in the code:
OK qref: ctsm Q2M = lilac vs wrf Q2. Comes with this comment in the code: |
Our 4-month WRF-CTSM simulation stopped on May 29th, so almost two months in, with this error in rsl.out.0015:
Would you like me to investigate further, or is this something one of you should look into? If you need access to this run's output, please look here:
Meanwhile, I just submitted the 4-month WRF-NOAH simulation to see if it stops with the same issue. |
@slevisconsulting I don't feel that I'd be able to track this down easily. So I'd like to let you and/or @barlage look into it. Thanks. |
@barlage @negin513 The WRF-CTSM simulation failed with last file written: |
I modified the error checking in lnd_import_export to tell me the specific variable name and value that triggers the error and started the WRF-CTSM simulation from the beginning. @barlage let me know if you have any time-saving insights. @negin513 feel free to work with the output from the WRF-NOAH simulation, while we debug the WRF-CTSM simulation. |
Ugh. The lack of restart capability is really going to kill us on debugging. We may need to discuss this. Sam, might be worth looking at model output in the days/hours leading up to the crash to see if it looks like there was some sort of instability growing or something like that.
|
My first check would be the albedo: is it possible that it is going above 1? After the radiation scheme runs, I believe this line calculates the SWDOWN field that's going into the LSM: |
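For context on why a bad albedo would matter there: the radiation scheme provides the net shortwave absorbed at the surface, and the downwelling flux handed to the LSM is recovered by dividing by (1 - albedo). A minimal sketch with illustrative variable names, not the actual WRF source:

program swdown_sketch
  implicit none
  real :: gsw, albedo, swdown
  gsw    = 500.0                 ! net shortwave absorbed at the surface (W/m2)
  albedo = 0.98                  ! surface albedo; values near or above 1 are trouble
  swdown = gsw / (1.0 - albedo)  ! downwelling shortwave handed to the LSM
  print *, 'swdown = ', swdown   ! blows up as albedo -> 1, flips sign if albedo > 1
end program swdown_sketch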
@barlage I'm trying the following change in module_sf_ctsm.F:
If the simulation doesn't stop, then the error was due to
I'm open to comments. |
FYI, there are currently a few pieces needed in order to get restarts to work within WRF. The pieces I know of are:
|
@barlage the new simulation stopped with last file written. Question 1: Do you expect WRF simulations to differ when they start from the same initial conditions? (Sorry if you told me this before and I forgot...) |
@slevisconsulting
and in the radiation option you are using, they are coming from note that we are not currently sending the 4-component albedo back to WRF, but we are using the 4-component radiation from WRF, which uses the mean albedo calculated in |
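For readers following along: CTSM expects the downwelling shortwave split into four components (visible/near-IR, direct/diffuse). A purely illustrative sketch of that split is below; the variable names and fractions are placeholders and are not taken from module_sf_ctsm.F:

program sw_split_sketch
  implicit none
  real :: swdown, swvisdir, swvisdif, swnirdir, swnirdif
  swdown   = 600.0            ! single downwelling shortwave flux from WRF (W/m2)
  swvisdir = 0.35 * swdown    ! visible, direct  (placeholder fraction)
  swvisdif = 0.15 * swdown    ! visible, diffuse (placeholder fraction)
  swnirdir = 0.35 * swdown    ! near-IR, direct  (placeholder fraction)
  swnirdif = 0.15 * swdown    ! near-IR, diffuse (placeholder fraction)
  print *, swvisdir + swvisdif + swnirdir + swnirdif   ! sums back to swdown
end program sw_split_sketch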
UPDATING THE CONTENTS OF THIS POST TO REFLECT LATEST RESULTS. @barlage @negin513 @dlawrenncar Back to the WRF-CTSM simulation that stops on 5/29 and that I have now reproduced. I have confirmed that the model stops with this error: |
Before we go down a new rabbit hole, please disregard the last post. After using (Sorry.) |
Unfortunately
For reason (2) I will resort to including
If I find that the error comes from subtracting a
Comments welcome. |
@slevisconsulting My main concern here is that this solution will likely find resistance from the WRF community unless we can prove that it happens with other land models. It is not clear that this is a bug in RRTMG and that's the implication with your suggestion. We should also print albedo values here to see if they are reasonable (or if something looks out of bounds) when the radiation goes negative. If you are doing simulations where you output every timestep, I suggest it might be worth running a simulation that maximizes radiation calls, i.e., change the model timestep to one minute and set the radiation scheme to activate every minute (radt = 1). That will give you ~30x the radiation calls in the same model time and we might catch an earlier crash. |
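For reference, the setup Mike describes would look roughly like this in namelist.input (time_step is in seconds under &domains and radt is in minutes under &physics; the values are illustrative):

&domains
 time_step = 60,
/

&physics
 radt = 1,
/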
Why does the WRF-CTSM case not crash sooner? I suspect that there's averaging related to the radiation frequency and the coupling frequency that prevents the model from crashing sooner. I will try to look into this.
I will try this, too.
I will hold off on this one since I'm seeing negative |
I have a better explanation for why the WRF-CTSM case does not crash sooner: All other negative values that I reported (including from the 1-day WRF-NOAH case) are in other atmospheric layers (i.e. > 1).
I added this write statement in
I am resubmitting the WRF-NOAH case to run longer, to see if this model ever gets |
WRF-CTSM simulation with negative incoming solar values reset to zero: Just submitted WRF-NOAH simulation to match the length of the two runs. |
...by the way, this didn't use the full 12 hours because it ran out of boundary conditions with this error: |
WRF-NOAH completed the 6 months. |
@billsacks with Mike @barlage 's approval I got past the negative incoming solar values discussed above, as follows. This change should go in the release to prevent unwanted crashes:

diff --git a/src/cpl/lilac/lnd_import_export.F90 b/src/cpl/lilac/lnd_import_export.F90
index 313d0ce6..fa21b5e7 100644
--- a/src/cpl/lilac/lnd_import_export.F90
+++ b/src/cpl/lilac/lnd_import_export.F90
@@ -326,13 +326,29 @@ contains
call shr_sys_abort( subname//&
' ERROR: Longwave down sent from the atmosphere model is negative or zero' )
end if
- if ( (atm2lnd_inst%forc_solad_grc(g,1) < 0.0_r8) .or. &
- (atm2lnd_inst%forc_solad_grc(g,2) < 0.0_r8) .or. &
- (atm2lnd_inst%forc_solai_grc(g,1) < 0.0_r8) .or. &
- (atm2lnd_inst%forc_solai_grc(g,2) < 0.0_r8) ) then
- call shr_sys_abort( subname//&
- ' ERROR: One of the solar fields (indirect/diffuse, vis or near-IR)'// &
- ' from the atmosphere model is negative or zero' )
+ if (atm2lnd_inst%forc_solad_grc(g,1) < 0.0_r8) then
+ write(iulog,*) 'WARNING from lnd_import_export.F90: field solad(1) from atm model is < 0 and reset here to 0'
+ atm2lnd_inst%forc_solad_grc(g,1) = 0._r8
+ end if
+ if (atm2lnd_inst%forc_solad_grc(g,2) < 0.0_r8) then
+ write(iulog,*) 'WARNING from lnd_import_export.F90: field solad(2) from atm model is < 0 and reset here to 0'
+ atm2lnd_inst%forc_solad_grc(g,2) = 0._r8
+ end if
+ if (atm2lnd_inst%forc_solai_grc(g,1) < 0.0_r8) then
+ write(iulog,*) 'WARNING from lnd_import_export.F90: field solai(1) from atm model is < 0 and reset here to 0'
+ atm2lnd_inst%forc_solai_grc(g,1) = 0._r8
+ end if
+ if (atm2lnd_inst%forc_solai_grc(g,2) < 0.0_r8) then
+ write(iulog,*) 'WARNING from lnd_import_export.F90: field solai(2) from atm model is < 0 and reset here to 0'
+ atm2lnd_inst%forc_solai_grc(g,2) = 0._r8
end if
|
@slevisconsulting I'm hesitant about making the change you suggested above in the general-purpose lilac cap: @ekluzek and @dlawrenncar have both pointed out that we really want to have error checks like this in the cap to ensure that atmosphere models are sending values with the correct sign. My impression is that there is some WRF issue whereby it sometimes sends negative solar radiation. If so, I'd suggest that the change be made in the WRF-specific code, so that WRF itself sets these negative fluxes to 0 before sending them to lilac. Does that seem like a reasonable alternative? If so, @negin513 I'd ask if you can add this to the list of changes you make in the WRF code. |
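A minimal sketch of what such a WRF-side guard could look like, with hypothetical variable names (the actual fields and names in module_sf_ctsm.F may differ):

program clamp_sketch
  implicit none
  real :: swvisdir, swvisdif, swnirdir, swnirdif
  ! pretend these came out of the WRF radiation driver (hypothetical names)
  swvisdir = -1.0e-6 ; swvisdif = 120.0 ; swnirdir = 250.0 ; swnirdif = -0.3
  ! clamp any (presumably tiny) negative downwelling solar to zero before handing off to LILAC
  swvisdir = max(swvisdir, 0.0)
  swvisdif = max(swvisdif, 0.0)
  swnirdir = max(swnirdir, 0.0)
  swnirdif = max(swnirdif, 0.0)
  print *, swvisdir, swvisdif, swnirdir, swnirdif
end program clamp_sketch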
@slevisconsulting Regarding your March 10 post: |
Thank you, @weiwangncar, this helps. Please see my response about taux and tauy below.
Thank you for confirming:
Copying and pasting from CTSM's comments:
Yes, thank you for confirming these! |
@slevisconsulting Thanks for the definition of tau. Like Z0M, the WRF model currently does not expect taux and tauy from a land model. Regarding Z0 or Z0M, does CTSM consider this an input or an output variable? |
@weiwangncar here's what we're doing:
So CTSM receives atmospheric reference height (zlvl) as an input and sends the roughness length (z0) as an output. |
@slevisconsulting Any idea how the land model computes z0m? Or does the land model have its own input parameters that set the z0m value? If either of these is true, it seems that WRF should have it for consistency purposes. It would be interesting to see the difference between what WRF has and what comes out of the land model. |
I think I'm missing something here and need some clarification. LILAC's swnet is a lnd -> atm field, set to CTSM's |
@billsacks my mistake. I had put FSDS and SWDOWN alongside swnet in my notes (I don't remember why, except maybe I was confirming that FSDS = SWDOWN) and ended up confusing myself. |
@weiwangncar according to the clm5 technote: The momentum roughness length is
The vegetation roughness lengths are a function of plant height adjusted for canopy density following Zeng and Wang (2007) |
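For reference, the canopy-density adjustment has roughly this shape (written from memory of the CLM5 technote; symbols and constants should be checked against the technote itself):

$$V = \frac{1 - \exp[-\beta \min(L+S,\; L_{cr})]}{1 - \exp(-\beta L_{cr})}, \qquad z_{0m,v} = \exp\!\left[ V \ln(z_{top} R_{z0m}) + (1 - V) \ln(z_{0m,g}) \right]$$

where $L$ and $S$ are the leaf and stem area indices, $z_{top}$ is the canopy top height, $R_{z0m}$ is the ratio of momentum roughness length to canopy height, and $z_{0m,g}$ is the bare-ground roughness length.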
@slevisconsulting I didn't realize CLM computes its own exchange coefficients. That probably means that WRF should skip the call to surface layer physics for land points. This may also mean that WRF would want to have u10 and v10 returned from the land model. |
I feel like this is something we discussed with @barlage . There are some possibly-relevant notes here: https://github.com/ESCOMP/CTSM/wiki/Meeting-Notes-2020-Software#fields-possibly-needed-by-wrf-with-various-schemes |
@weiwangncar I pinged you again in my Mar 10 post with a new question. I'm sharing here in case it's difficult to sift through the Mar 10 post due to its length: lwup: ctsm FIRE = lilac lwup vs wrf ?? Not passed to WRF but passing tsk. Does WRF calculate the upward longwave flux from tsk, @weiwangncar? |
The answer is yes. The atmospheric radiation schemes use the skin temp.
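For reference, the standard relation a radiation scheme applies to get the surface upward longwave from the skin temperature is sketched below (emissivity handling varies by scheme; this is illustrative, not the actual WRF code):

program lwup_sketch
  implicit none
  real, parameter :: sigma = 5.670374e-8   ! Stefan-Boltzmann constant (W m-2 K-4)
  real :: tsk, emiss, lwdown, lwup
  tsk    = 300.0     ! skin temperature (K)
  emiss  = 0.97      ! surface emissivity
  lwdown = 350.0     ! downwelling longwave at the surface (W m-2)
  lwup   = emiss * sigma * tsk**4 + (1.0 - emiss) * lwdown   ! emitted + reflected
  print *, 'lwup = ', lwup
end program lwup_sketch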
|
Great, thank you @weiwangncar. Good news: |
Great @slevisconsulting - thank you! Feel free to close this issue if you feel there is nothing more to be done here. |
We have to discuss the diagnostic variables needed in WRF output files. More discussion of diagnostic variables is in PR #915
We should go through all of the variables sent from CTSM to WRF, confirming that they have the correct units and sign conventions. We should also double-check that they match the appropriate variables in CTSM - i.e., that the plumbing has been set up correctly so that a given variable foo in CTSM ends up appearing as the corresponding variable foo (with different name but same meaning) in WRF.