Check variables sent from CTSM to WRF #911

Closed
billsacks opened this issue Feb 3, 2020 · 38 comments
Label: investigation (Needs to be verified and more investigation into what's going on.)

@billsacks
Member

We should go through all of the variables sent from CTSM to WRF, confirming that they have the correct units and sign conventions. We should also double-check that the plumbing has been set up correctly, i.e., that a given variable foo in CTSM ends up appearing as the corresponding variable (with a different name but the same meaning) in WRF.

billsacks added the investigation label on Feb 3, 2020
@slevis-lmwg
Contributor

slevis-lmwg commented Mar 10, 2020

In the process of updating this table with additional info...

@weiwangncar I pinged you for your input below. If you search by your handle (weiwangncar) you should find all my questions. Pls let me know if anything is unclear.

Variables listed from
ncdump test_lilac.lilac.hi.2013-04-01-86310.nc

Bill recommends that I focus on lnd --> atm variables (last two groups of variables below).

with prefix atm_to_cpl_Faxa_, cpl_to_lnd_atm_Faxa_
[b,o]cphi[dry,wet] atm2lnd_inst%forc_aer_grc(:,1:6) ctsm units = [kg m-1 s-1] correct?
[b,o]cphodry atm2lnd_inst%forc_aer_grc(:,2) ctsm units = [kg m-1 s-1] correct?
dst[dry,wet][1-4] atm2lnd_inst%forc_aer_grc(:,7:14) ctsm units = [kg m-1 s-1] correct?
rain[c,l] forc_rain[c,l] ctsm units = [mm s-1]
snow[c,l] forc_snow[c,l] ctsm units = [mm s-1]
sw[n,v]d[f,r] atm2lnd_inst%forc_sola[d,i]_grc ctsm units = [W m-2]
lwdn atm2lnd_inst%forc_lwrad_not_downscaled_grc ctsm units = [W m-2]

with prefix atm_to_cpl_Sa_, cpl_to_lnd_atm_Sa_
landfrac
[p,t]bot
ptem
shum
topo
u, v, z

with prefix atm_to_cpl_, cpl_to_atm_, lnd_to_cpl_atm_, lnd_to_cpl_rof_, cpl_to_lnd_atm_
lon, lat

with prefix lnd_to_cpl_rof_Flrl_
irrig
rof[gwl, i, sub, sur]

with prefix cpl_to_atm_Fall_, lnd_to_cpl_atm_Fall_
OK lat: ctsm QSOIL+QVEGE+QVEGT = wrf QFX (mm/s)
OK evap: ctsm FGEV+FCEV+FCTR = wrf LH (W/m2)
OK sen: ctsm FSH = wrf HFX (W/m2)

OK flxdst[1-4]: ctsm DSTFLXT vs lilac sum(flxdst[1:4]) vs wrf ?? Not passed to WRF. Placeholder.
OK tau[x,y]: ctsm = lilac vs wrf TAU[X,Y] Not passed to WRF. Placeholder.
OK lwup: ctsm FIRE = lilac lwup vs wrf ?? Not passed to WRF but passing tsk and WRF calculates the upward longwave flux from tsk.

with prefix cpl_to_atm_Sl_, lnd_to_cpl_atm_Sl_
OK snowh: ctsm SNOWDP .ne. lilac .ne. wrf SNOWH Not passed to WRF. Placeholder.
OK fv: ctsm (not in ctsm history) vs lilac = wrf UST (m/s)
OK tref: ctsm TSA = lilac = wrf T2 (K)
OK t: ctsm t_rad_grc vs lilac vs wrf tsk (K)
OK a[ni,vs]d[f,r]: ctsm FSR/FSDS .ne. lilac = wrf ALBEDO
OK u10: ctsm = lilac .ne. wrf sqrt(u10^2+v10^2) Not passed to WRF. Placeholder.
OK ram1: ctsm vs wrf RA Not passed to WRF. Placeholder.

OK z0m: ctsm vs wrf z0 Comes with this comment in the code:
Use momentum roughness length for both background roughness length and thermal time-varying roughness length, even though it's possible that they should differ in some way.

OK qref: ctsm Q2M = lilac vs wrf Q2 Comes with this comment in the code:
Convert from specific humidity to mixing ratio. Note that qref is specific humidity at 2m, whereas qsfc is supposed to be specified right at the surface. So there isn't a perfect correspondence between the two, but given that qsfc is just being used as a diagnostic quantity when using CTSM (for now), we won't worry about this.
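For reference, the specific-humidity-to-mixing-ratio conversion mentioned in that comment follows from the definitions: if q is specific humidity and w is mixing ratio, then q = w/(1+w), so w = q/(1-q). A minimal sketch (not the actual coupling code; the qsfc/qref names follow the comment above):

    ! mixing ratio (kg/kg) from 2-m specific humidity (kg/kg)
    qsfc(i,j) = qref(i,j) / (1. - qref(i,j))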

@slevis-lmwg
Contributor

@billsacks @barlage

Our 4-month WRF-CTSM simulation stopped on May 29th, so almost two months in, with this error in rsl.out.0015:

(lnd_import_export:import_fields) ERROR: One of the solar fields (indirect/diffuse, vis or near-IR) from the atmosphere model is negative or zero

Would you like me to investigate further, or is this something one of you should look into? If you need access to this run's output, pls look here:
/glade/scratch/slevis/git_wrf/WRF/run/results/long/ctsm_w_wrf_ic

Meanwhile, I just submitted the 4-month WRF-NOAH simulation to see if it stops with the same issue.

@billsacks
Member Author

@slevisconsulting I don't feel that I'd be able to track this down easily. So I'd like to let you and/or @barlage look into it. Thanks.

@slevis-lmwg
Contributor

slevis-lmwg commented Apr 1, 2020

@barlage @negin513
Update:
The WRF-NOAH simulation ran for the full 12 hours and completed APR, MAY, and JUN. Last history file written:
wrfout_d01_2013-07-27_20:00:00
Files located here:
/glade/scratch/slevis/git_wrf/WRF/run/results/long/noah_no_init_snow

The WRF-CTSM simulation failed, with the last file written being:
wrfout_d01_2013-05-29_16:00:00
and the error message that I posted above. I will see if I can identify the variable that triggers the error.

@slevis-lmwg
Contributor

I modified the error checking in lnd_import_export to tell me the specific variable name and value that triggers the error and started the WRF-CTSM simulation from the beginning.

@barlage let me know if you have any time-saving insights.
@negin513 feel free to work with the output from the WRF-NOAH simulation, while we debug the WRF-CTSM simulation.

@dlawrenncar
Contributor

dlawrenncar commented Apr 1, 2020 via email

@barlage

barlage commented Apr 1, 2020

My first check would be the albedo: is it possible that it is going above 1? After the radiation scheme runs, I believe this line calculates the SWDOWN field that goes into the LSM:

https://github.com/wrf-model/WRF/blob/7df4461d0c7e2dd86fa83309172054bd9a396100/phys/module_radiation_driver.F#L2585

@slevis-lmwg
Contributor

@barlage I'm trying the following change in module_sf_ctsm.F:

- if (abs(albedo(i,j) - 1.) < 1.e-5) then
+ if (albedo(i,j) >= 0.99999) then

If the simulation doesn't stop, then the error was due to
albedo > 1.00001

I'm open to comments.
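
In case it helps others follow, a standalone sketch (illustrative values only) of what each condition catches:

    program albedo_check_demo
      implicit none
      real :: a(3)
      integer :: k
      a = (/ 0.999995, 1.00002, 0.5 /)
      do k = 1, size(a)
         ! old check trips only in the narrow band 0.99999 < a < 1.00001;
         ! new check also trips for values above that band, e.g. a > 1.00001
         write(*,*) a(k), ' old:', (abs(a(k) - 1.) < 1.e-5), ' new:', (a(k) >= 0.99999)
      end do
    end program albedo_check_demo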

@billsacks
Member Author

billsacks commented Apr 1, 2020

Ugh. The lack of restart capability is really going to kill us on debugging. We may need to discuss this.

FYI, there are currently a few pieces needed in order to get restarts to work within WRF. The pieces I know of are:

  1. Appropriately signaling to CTSM that it should start up from a restart file. This may be as easy as setting start_type = continue in ctsm.cfg, then regenerating the lnd_in file and running, as long as there is an rpointer.lnd file and an associated restart file present in the run directory. However, there may be other things needed (see the sketch after this list).

  2. Writing a restart file from CTSM at the appropriate time. This requires some understanding of how to get the correct flags from WRF (Enable restarts in WRF-LILAC-CTSM coupling #876): see:

https://github.com/billsacks/WRF/blob/55faee764c4c65d878ace0c68cf7da3a07c03e0c/phys/module_sf_ctsm.F#L548-L551

  3. Ensuring that restarts are actually bit-for-bit. When Mariana tested this with the demo atm driver in December, restarts differed at the roundoff level, which she attributed to differences in the aerosol inputs being read by LILAC's datm-like capability: see LILAC restart are round-off level different in aerosol input fields #863
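
A sketch of piece 1, using only what is described above (file names illustrative):

    # ctsm.cfg: switch CTSM to a continue run, then regenerate lnd_in
    start_type = continue

    # the run directory must already contain the restart pieces, e.g.:
    #   rpointer.lnd                          (names the restart file)
    #   <case>.clm2.r.2013-05-29-00000.nc     (the restart file itself)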

@slevis-lmwg
Contributor

@barlage the new simulation stopped with the last file written being wrfout_d01_2013-04-15_19:00:00, which is more than a month earlier than in the previous simulation. This time one of my new error messages was triggered:
(lnd_import_export:import_fields) ERROR: field solai(2) from the atm model is < 0

Question 1: Do you expect WRF simulations to differ when they start from the same initial conditions? (Sorry if you told me this before and I forgot...)
Question 2: Looking again at module_radiation_driver.F, and the calculation of swddif... I don't see the split between VIS and NIR. I'm starting to look elsewhere, but can you tell me quickly where that happens? My current best guess is that there may be a bug there.

@barlage

barlage commented Apr 3, 2020

@slevisconsulting

  1. No, the simulations should not change during a re-run with the same ICs.
  2. Does your error message also print the albedo? Are you sure it is not >= 1? The appropriate fields in radiation_driver are:

SWVISDIR, SWVISDIF, SWNIRDIR, SWNIRDIF

and in the radiation option you are using, they are coming from rrtmg_swrad

note that we are not currently sending the 4-component albedo back to WRF, but we are using the 4-component radiation from WRF, which uses the mean albedo calculated in module_sf_ctsm

@slevis-lmwg
Contributor

slevis-lmwg commented Apr 3, 2020

UPDATING THE CONTENTS OF THIS POST TO REFLECT LATEST RESULTS

@barlage @negin513 @dlawrenncar
The effect that I showed in our meeting from my 1-line change
appears to be due to rebuilding the code WITHOUT FIRST RUNNING clean -a.
The clue that convinced me: I repeated the WRF-NOAH simulation without clean -a and got different answers after the rebuild than in the original run; when I repeated with clean -a first, the rebuild reproduced the original answers.

Back to the WRF-CTSM simulation that stops on 5/29 and that I have now reproduced. I have confirmed that the model stops with this error:
field solai(2) from the atm model is < 0
...but I will refrain from investigating further until I have the bug-fix from @billsacks. See the email with this subject line:
Potentially serious bug in snow aerosols starting with ctsm1.0.dev065

@slevis-lmwg
Contributor

Before we go down a new rabbit hole, please disregard the last post. After using clean -a my answers have changed again and this time I am able to reproduce the original crash. I will update the previous post to avoid confusion...

(Sorry.)

@slevis-lmwg
Contributor

Unfortunately
1) Running with Bill's bug-fix made no difference: answers are bit-for-bit the same as before, and the model crashes with the same error at the same time as before.
2) The run with output at every timestep ran out of time (12 hrs) on 5/13, so before the 5/29 failure, and we can't see what happened.

For reason (2) I will resort to adding write statements in module_ra_rrtmg_sw.F after two lines:

    difdnuv(i) = zuvfd(i) - dirdnuv(i)

and, more importantly, after the variable that fails:

    difdnir(i) = znifd(i) - dirdnir(i)
    if (difdnir(i) < 0.) then ! begin slevis diagnostic
       write(0,*) 'znifd, dirdnir, i =', znifd(i), dirdnir(i), i
    end if ! end slevis diagnostic

If I find that the error comes from subtracting a dirdnir that is slightly greater than znifd, then I recommend avoiding the problem with:

    difdnuv(i) = max(0., zuvfd(i) - dirdnuv(i))
    difdnir(i) = max(0., znifd(i) - dirdnir(i))

This has the potential to mask more serious errors, so I could add error checks confirming that the differences are very small (one possible shape is sketched below). I'm open to recommendations on acceptable error thresholds.
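
One possible shape for that check, at the same spot in module_ra_rrtmg_sw.F (the tolerance value is purely illustrative and would need tuning):

    real, parameter :: neg_tol = 1.e-3  ! hypothetical tolerance (W m-2)
    ...
    difdnir(i) = znifd(i) - dirdnir(i)
    if (difdnir(i) < -neg_tol) then
       ! more negative than roundoff should allow: report rather than silently clamp
       write(0,*) 'znifd - dirdnir < -neg_tol:', difdnir(i), ', i =', i
    end if
    difdnir(i) = max(0., difdnir(i))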

Comments welcome.

@barlage

barlage commented Apr 9, 2020

@slevisconsulting My main concern here is that this solution will likely meet resistance from the WRF community unless we can prove that it happens with other land models. It is not clear that this is a bug in RRTMG, yet that is the implication of your suggestion. We should also print albedo values here to see whether they are reasonable (or whether something looks out of bounds) when the radiation goes negative.

If you are doing simulations where you output every timestep, I suggest it might be worth running a simulation that maximizes radiation calls, i.e., change the model timestep to one minute and set the radiation scheme to activate every minute (radt = 1). That will give you ~30x the radiation calls in the same model time and we might catch an earlier crash.
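
For reference, a namelist.input sketch of that suggestion (values illustrative; note radt is in minutes and time_step in seconds):

    &domains
     time_step = 60,   ! 1-minute model timestep
    /
    &physics
     radt = 1,         ! call the radiation scheme every minute
    /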

@slevis-lmwg
Contributor

> @slevisconsulting My main concern here is that this solution will likely meet resistance from the WRF community unless we can prove that it happens with other land models.

  • WRF-CTSM with the write statements shows many cases of negative difdnir and difdnuv even before the 5/29 crash...
  • WRF-NOAH with the same write statements also shows many cases of negative difdnir and difdnuv. So this happens with at least one other land model.

Why does the WRF-CTSM case not crash sooner? I suspect that there's averaging related to the radiation frequency and the coupling frequency that prevents the model from crashing sooner. I will try to look into this.

> We should also print albedo values here to see whether they are reasonable (or whether something looks out of bounds) when the radiation goes negative.

I will try this, too.

> If you are doing simulations where you output every timestep, I suggest it might be worth running a simulation that maximizes radiation calls, i.e., change the model timestep to one minute and set the radiation scheme to activate every minute (radt = 1). That will give you ~30x the radiation calls in the same model time and we might catch an earlier crash.

I will hold off on this one since I'm seeing negative difdnir and difdnuv as early as day 1 in both runs.

@slevis-lmwg
Contributor

slevis-lmwg commented Apr 9, 2020

I have a better explanation for why the WRF-CTSM case does not crash sooner:
The model crashes the first time that I see a negative difdnir in the bottom atmospheric layer (on 5/29). I have taken this negative value one step back to subroutine spcvmc_sw, where I find that the model crashes the first time that I see pnifd < pnifddir in the bottom atm. layer (on 5/29).

All other negative values that I reported (including from the 1-day WRF-NOAH case) are in other atmospheric layers (i.e., layer index > 1).

I added this write statement in SUBROUTINE RRTMG_SWRAD after the four components of albedo are set equal to albedo:

    if (albedo(i,j) < 0. .or. albedo(i,j) > 1.) then ! begin slevis diag
       write(0,*) 'albedo, i, j =', albedo(i,j), i, j
    end if ! end slevis diag

and got nothing, suggesting good albedo values in the WRF-CTSM case.

I am resubmitting the WRF-NOAH case to run longer, to see if this model ever gets pnifd < pnifddir in the bottom atm. layer.

@slevis-lmwg
Contributor

slevis-lmwg commented Apr 10, 2020

> I am resubmitting the WRF-NOAH case to run longer, to see if this model ever gets pnifd < pnifddir in the bottom atm. layer.

It does:

    pnifd, pnifddir, ikl = 523.2111 523.5358 1
    znifd, dirdnir, i = 523.2111 523.5358 1
    Timing for main: time 2013-04-20_21:31:30 on domain 1

The albedo write statements return nothing, so presumably the albedo remains within the correct range.

WRF-NOAH SWNIRDIF on Apr 20 at 2200 UTC:
[Figure: automatic colorbar scale (left) vs. manual colorbar scale to show the negative value (right).]
The negative value is -0.32 W/m2 at 120.76 W, 27.47 N, off the coast of Baja California.
[Figure: map of the location of the negative SWNIRDIF value.]
...but again, this is the first occurrence in WRF-NOAH.

I see a second occurrence in the WRF-NOAH case but cannot locate it in the hourly-averaged history:

    pnifd, pnifddir, ikl = 441.8751 451.2035 1
    znifd, dirdnir, i = 441.8751 451.2035 1
    mediation_integrate.G 1944 DATASET=HISTORY
    mediation_integrate.G 1945 grid%id 1 grid%oid 2
    d01 2013-05-09_18:00:00 This input data is not V4: OUTPUT FROM REAL_EM

@slevis-lmwg
Contributor

The WRF-CTSM simulation with negative incoming solar values reset to zero completed 6 months (APR - SEP).
@negin513 output is here: /glade/scratch/slevis/git_wrf/WRF/run/results/long/ctsm_w_wrf_ic

Just submitted WRF-NOAH simulation to match the length of the two runs.

@slevis-lmwg
Contributor

...by the way, this didn't use the full 12 hours because it ran out of boundary conditions with this error:
---- ERROR: Ran out of valid boundary conditions in file

@slevis-lmwg
Contributor

WRF-NOAH completed the 6 months.
@negin513 output is here: /glade/scratch/slevis/git_wrf/WRF/run/results/long/noah_no_init_snow

negin513 self-assigned this on Jun 1, 2020
@slevis-lmwg
Contributor

slevis-lmwg commented Jun 12, 2020

@billsacks with Mike @barlage 's approval I got past the negative incoming solar values discussed above, as follows. This change should go in the release to prevent unwanted crashes:

diff --git a/src/cpl/lilac/lnd_import_export.F90 b/src/cpl/lilac/lnd_import_export.F90
index 313d0ce6..fa21b5e7 100644
--- a/src/cpl/lilac/lnd_import_export.F90
+++ b/src/cpl/lilac/lnd_import_export.F90
@@ -326,13 +326,29 @@ contains
           call shr_sys_abort( subname//&
                ' ERROR: Longwave down sent from the atmosphere model is negative or zero' )
        end if
-       if ( (atm2lnd_inst%forc_solad_grc(g,1) < 0.0_r8) .or. &
-            (atm2lnd_inst%forc_solad_grc(g,2) < 0.0_r8) .or. &
-            (atm2lnd_inst%forc_solai_grc(g,1) < 0.0_r8) .or. &
-            (atm2lnd_inst%forc_solai_grc(g,2) < 0.0_r8) ) then
-          call shr_sys_abort( subname//&
-               ' ERROR: One of the solar fields (indirect/diffuse, vis or near-IR)'// &
-               ' from the atmosphere model is negative or zero' )
+       if (atm2lnd_inst%forc_solad_grc(g,1) < 0.0_r8) then
+          write(iulog,*) 'WARNING from lnd_import_export.F90: field solad(1) from atm model is < 0 and reset here to 0'
+          atm2lnd_inst%forc_solad_grc(g,1) = 0._r8
+       end if
+       if (atm2lnd_inst%forc_solad_grc(g,2) < 0.0_r8) then
+          write(iulog,*) 'WARNING from lnd_import_export.F90: field solad(2) from atm model is < 0 and reset here to 0'
+          atm2lnd_inst%forc_solad_grc(g,2) = 0._r8
+       end if
+       if (atm2lnd_inst%forc_solai_grc(g,1) < 0.0_r8) then
+          write(iulog,*) 'WARNING from lnd_import_export.F90: field solai(1) from atm model is < 0 and reset here to 0'
+          atm2lnd_inst%forc_solai_grc(g,1) = 0._r8
+       end if
+       if (atm2lnd_inst%forc_solai_grc(g,2) < 0.0_r8) then
+          write(iulog,*) 'WARNING from lnd_import_export.F90: field solai(2) from atm model is < 0 and reset here to 0'
+          atm2lnd_inst%forc_solai_grc(g,2) = 0._r8
        end if
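
For what it's worth, the four nearly identical blocks could also be folded into one loop over the two bands; a sketch equivalent to the diff above:

    do k = 1, 2  ! 1 = visible, 2 = near-IR
       if (atm2lnd_inst%forc_solad_grc(g,k) < 0.0_r8) then
          write(iulog,*) 'WARNING from lnd_import_export.F90: solad(', k, ') from atm model is < 0 and reset here to 0'
          atm2lnd_inst%forc_solad_grc(g,k) = 0._r8
       end if
       if (atm2lnd_inst%forc_solai_grc(g,k) < 0.0_r8) then
          write(iulog,*) 'WARNING from lnd_import_export.F90: solai(', k, ') from atm model is < 0 and reset here to 0'
          atm2lnd_inst%forc_solai_grc(g,k) = 0._r8
       end if
    end do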

@billsacks
Member Author

@slevisconsulting I'm hesitant about making the change you suggested above in the general-purpose lilac cap: @ekluzek and @dlawrenncar have both pointed out that we really want to have error checks like this in the cap to ensure that atmosphere models are sending values with the correct sign.

My impression is that there is some WRF issue whereby it sometimes sends negative solar radiation. If so, I'd suggest that the change be made in the WRF-specific code, so that WRF itself sets these negative fluxes to 0 before sending them to lilac. Does that seem like a reasonable alternative? If so, @negin513 I'd ask if you can add this to the list of changes you make in the WRF code.
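
To make that alternative concrete, a sketch of what the clamp could look like in module_sf_ctsm.F before the fields are sent to LILAC (the variable names are assumptions based on the SWVISDIR/SWVISDIF/SWNIRDIR/SWNIRDIF fields mentioned earlier in this thread, not the actual code):

    ! Reset roundoff-negative solar components to zero before export to LILAC
    swvisdir(i,j) = max(0., swvisdir(i,j))   ! direct visible
    swvisdif(i,j) = max(0., swvisdif(i,j))   ! diffuse visible
    swnirdir(i,j) = max(0., swnirdir(i,j))   ! direct near-IR
    swnirdif(i,j) = max(0., swnirdif(i,j))   ! diffuse near-IR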

@weiwangncar

@slevisconsulting Regarding your March 10 post:
SWDOWN in WRF is not the net shortwave, but the downward shortwave radiation flux. These variable names have a long history, so we are unlikely to change them. Sorry.
TAU: What is this variable? How is it defined?
U10: At the moment WRF does not expect a land model to provide updated 10 m winds. Is this what you're asking?
Z0M: Again at this time WRF does not expect updated roughness length from a land model. But it could be useful if we understand how it is calculated in the land model.

@slevis-lmwg
Contributor

Thank you, @weiwangncar, this helps. Please see my response about taux and tauy below.

> SWDOWN in WRF is not the net shortwave, but the downward shortwave radiation flux.

Thank you for confirming:
@billsacks , WRF's SWDOWN = CTSM's FSDS should be renamed in LILAC from swnet to swdown.

> TAU: What is this variable? How is it defined?

Copying and pasting from CTSM's comments:
taux is wind (shear) stress: e-w (kg/m/s2)
and
tauy is wind (shear) stress: n-s (kg/m/s2)

> U10: At the moment WRF does not expect a land model to provide updated 10 m winds. Is this what you're asking?
> Z0M: Again, at this time WRF does not expect updated roughness length from a land model. But it could be useful if we understand how it is calculated in the land model.

Yes, thank you for confirming these!

@weiwangncar

@slevisconsulting Thanks for the definition of tau. Like Z0M, the WRF model currently does not expect taux and tauy from a land model. Regarding Z0 or Z0M, does CTSM consider this an input or an output variable?

@slevis-lmwg
Contributor

@weiwangncar here's what we're doing:

    ! Reference height: midpoint of the lowest model layer (half the layer-1 thickness)
    zlvl(its:ite, jts:jte) = 0.5 * dz8w(its:ite, 1, jts:jte)

    ! Send the atmospheric reference height to CTSM through LILAC
    call export_to_lilac(lilac_a2l_Sa_z, zlvl)

    ! Receive the momentum roughness length computed by CTSM
    call import_from_lilac(lilac_l2a_Sl_z0m, z0)

So CTSM receives the atmospheric reference height (zlvl) as an input and sends the roughness length (z0) as an output.

@weiwangncar

@slevisconsulting Any idea how the land model computes z0m? Or does the land model have its own input parameters that set the z0m value? If either of these is true, it seems that WRF should have it for consistency purposes. It would be interesting to see the difference between what WRF has and what comes out of the land model.

@billsacks
Member Author

> > SWDOWN in WRF is not the net shortwave, but the downward shortwave radiation flux.
>
> Thank you for confirming:
> @billsacks , WRF's SWDOWN = CTSM's FSDS should be renamed in LILAC from swnet to swdown.

I think I'm missing something here and need some clarification.

LILAC's swnet is a lnd -> atm field, set to CTSM's fsa_grc, which is documented in CTSM as solar rad absorbed (total) (W/m**2). It looks like this follows the same logic as is used in CESM (via CTSM's mct cap). But it doesn't look like this swnet field is actually received by WRF. So what am I missing? Can you explain the problem in more detail?

@slevis-lmwg
Contributor

> > > SWDOWN in WRF is not the net shortwave, but the downward shortwave radiation flux.
> >
> > Thank you for confirming:
> > @billsacks , WRF's SWDOWN = CTSM's FSDS should be renamed in LILAC from swnet to swdown.
>
> I think I'm missing something here and need some clarification.
>
> LILAC's swnet is a lnd -> atm field, set to CTSM's fsa_grc, which is documented in CTSM as solar rad absorbed (total) (W/m**2). It looks like this follows the same logic as is used in CESM (via CTSM's mct cap). But it doesn't look like this swnet field is actually received by WRF. So what am I missing? Can you explain the problem in more detail?

@billsacks my mistake. I had put FSDS and SWDOWN alongside swnet in my notes (I don't remember why, except maybe I was confirming that FSDS = SWDOWN) and ended up confusing myself.

@slevis-lmwg
Contributor

> @slevisconsulting Any idea how the land model computes z0m? Or does the land model have its own input parameters that set the z0m value? If either of these is true, it seems that WRF should have it for consistency purposes. It would be interesting to see the difference between what WRF has and what comes out of the land model.

@weiwangncar according to the CLM5 tech note:

The momentum roughness length is
z0m,g = 0.01 m for soil and glaciers
z0m,g = 0.0024 m for snow-covered surfaces (fsno > 0)

The vegetation roughness lengths are a function of plant height adjusted for canopy density, following Zeng and Wang (2007):
z0m,v = z0h,v = z0w,v = exp[ V * ln(ztop * R_z0m) + (1 - V) * ln(z0m,g) ]
where ztop is canopy top height, R_z0m is the ratio of momentum roughness length to canopy top height, and V is a fractional weight that is a function of LAI + SAI (eq. 5.127).
CLM5_Tech_Note.pdf
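
A direct transcription of that vegetation formula into Fortran (a sketch; V would come from eq. 5.127, which is not reproduced here):

    ! CLM5 vegetation momentum roughness length, following Zeng and Wang (2007)
    real function z0m_veg(ztop, r_z0m, v, z0m_g)
      real, intent(in) :: ztop    ! canopy top height (m)
      real, intent(in) :: r_z0m   ! ratio of z0m to canopy top height
      real, intent(in) :: v       ! fractional weight, a function of LAI+SAI
      real, intent(in) :: z0m_g   ! ground roughness length (m), e.g. 0.01 for soil
      z0m_veg = exp( v*log(ztop*r_z0m) + (1. - v)*log(z0m_g) )
    end function z0m_veg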

@weiwangncar

@slevisconsulting I didn't realize CLM computes its own exchange coefficients. That probably means that WRF should skip the call to surface layer physics for land points. This may also mean that WRF would want to have u10 and v10 returned from the land model.

@billsacks
Member Author

I feel like this is something we discussed with @barlage . There are some possibly-relevant notes here: https://github.com/ESCOMP/CTSM/wiki/Meeting-Notes-2020-Software#fields-possibly-needed-by-wrf-with-various-schemes

@slevis-lmwg
Contributor

@weiwangncar I pinged you again in my Mar 10 post with a new question. I'm sharing here in case it's difficult to sift through the Mar 10 post due to its length:

> lwup: ctsm FIRE = lilac lwup vs wrf ?? Not passed to WRF but passing tsk. Does WRF calculate the upward longwave flux from tsk, @weiwangncar?

@weiwangncar

weiwangncar commented Jun 23, 2020 via email

@slevis-lmwg
Contributor

Great, thank you @weiwangncar .

Good news:
@billsacks I have now checked all the variables that needed checking here and have not encountered any problems or inconsistencies.

@billsacks
Member Author

Great @slevisconsulting - thank you! Feel free to close this issue if you feel there is nothing more to be done here.

@negin513
Contributor

negin513 commented Sep 2, 2020

We have to discuss the diagnostic variables needed in the WRF output files. More discussion of diagnostic variables is in PR #915.
