8 day forecast starting 2019-08-29 #86

Closed

jedwards4b opened this issue Feb 13, 2020 · 66 comments
@jedwards4b (Collaborator)

Here is the complete output: /glade/scratch/jedwards/ufstest/run. I don't have enough experience with this model to know whether it's any good or not. @arunchawla-NOAA, who should look at it?

@arunchawla-NOAA (Collaborator)

arunchawla-NOAA commented Feb 13, 2020 via email

@jedwards4b (Collaborator, Author)

It only took about 30 minutes. @rsdunlapiv, can you copy this directory to Hera?

@arunchawla-NOAA (Collaborator)

@jedwards4b given the NSST setting issue, maybe we need to rerun this case. But first let's ensure that the regression tests pass.

@rsdunlapiv (Collaborator)

I started the copy to Hera:
/scratch1/NCEPDEV/nems/Rocky.Dunlap/ufs8day.tar.gz
It looks like it will take another hour to complete the transfer.

@pjpegion (Collaborator)

I'm trying to run the Dorian test case on Cheyenne with the GNU compiler, and I'm running into issues.
First of all, the workflow is looking for data in /glade/p/cesmdata/cseg/ufs_inputdata/icfiles/gfsanl/gfs.20190829/00, but the data is really in
/glade/p/cesmdata/cseg/ufs_inputdata/icfiles/gfsanl/201908/20190829/

After I changed the path in src/model/FV3/cime/cime_config/buildnml, I now get the following error when running case.submit (or ./check_input_data):

Loading input file list: 'Buildconf/ufsatm.input_data_list'
Traceback (most recent call last):
File "./check_input_data", line 76, in
_main_func(doc)
File "./check_input_data", line 71, in _main_func
chksum=chksum) else 1)
File "/glade/work/pegion/ufs-mrweather-app/cime/scripts/Tools/../../scripts/lib/CIME/case/check_input_data.py", line 166, in check_all_input_data
input_data_root=input_data_root, data_list_dir=data_list_dir, chksum=chksum and chksum_found)
File "/glade/work/pegion/ufs-mrweather-app/cime/scripts/Tools/../../scripts/lib/CIME/case/check_input_data.py", line 321, in check_input_data
if iput_ic_root and input_ic_root in full_path
NameError: global name 'iput_ic_root' is not defined

@rsdunlapiv (Collaborator)

@arunchawla-NOAA the transfer has completed:
/scratch1/NCEPDEV/nems/Rocky.Dunlap/ufs8day.tar.gz

@uturuncoglu (Collaborator)

@pjpegion this is already fixed; there was a typo in the file. Could you update CIME to the latest remotes/origin/ufs_release_v1.0 branch? That will fix the problem.
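For anyone hitting the same traceback: the NameError points at a misspelled variable name in check_input_data.py. A sketch of the presumed one-word fix (the exact code on the release branch may differ):

# CIME/case/check_input_data.py, line 321 -- before: the first reference is
# misspelled, so Python raises NameError at runtime
if iput_ic_root and input_ic_root in full_path:
# after (presumed fix): both references use the defined name
if input_ic_root and input_ic_root in full_path: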

@jedwards4b (Collaborator, Author)

@pjpegion we are in the process of debugging this case now; we are not ready for you to run it unless you want to help figure out the problem. Thanks.

@jedwards4b (Collaborator, Author)

Continued from the email discussion.
I am now using the data from the file gfs_4_20190829_0000_000.grb2.
I have made the following changes from the default namelist:

nfhmax_hf=6
nfhout=6
nfhout_hf=6
nstf_name = 0,0,0,0,0
convert_nst = .false.

And the model is blowing up:

15:MPT: #1  0x00002aaf778fcdb6 in mpi_sgi_system (
15:MPT: #2  MPI_SGI_stacktraceback (
15:MPT:     header=header@entry=0x7ffdd0d80a40 "MPT ERROR: Rank 15(g:15) received signal SIGSEGV(11).\n\tProcess ID: 42759, Host: r6i0n4, Program: /glade/scratch/jedwards/ufstest/bld/ufs.exe\n\tMPT Version: HPE MPT 2.19  02/23/19 05:30:09\n")
15:MPT:     at sig.c:340
15:MPT: #3  0x00002aaf778fcfb2 in first_arriver_handler (signo=signo@entry=11, 
15:MPT:     stack_trace_sem=stack_trace_sem@entry=0x2aaf852c0080) at sig.c:489
15:MPT: #4  0x00002aaf778fd34b in slave_sig_handler (signo=11, 
15:MPT:     siginfo=<optimized out>, extra=<optimized out>) at sig.c:564
15:MPT: #5  <signal handler called>
15:MPT: #6  sfc_nst_mp_sfc_nst_run_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/physics/physics/sfc_nst.f:365
15:MPT: #7  0x0000000000b3bce4 in ccpp_fv3_gfs_v15p2_physics_cap_mp_fv3_gfs_v15p2_physics_run_cap_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/physics/ccpp_FV3_GFS_v15p2_physics_cap.F90:592
15:MPT: #8  0x0000000000b07fda in ccpp_static_api_mp_ccpp_physics_run_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/physics/ccpp_static_api.F90:150
15:MPT: #9  0x0000000000b09976 in ccpp_driver_mp_ccpp_step_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/ccpp/driver/CCPP_driver.F90:234
15:MPT: #10 0x00000000004c02ec in atmos_model_mod_mp_update_atmos_radiation_physics_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/atmos_model.F90:364
15:MPT: #11 0x00000000004b6ab3 in module_fcst_grid_comp_mp_fcst_run_phase_1_ ()
15:MPT:     at /glade/scratch/jedwards/ufstest/bld/atm/obj/FV3/module_fcst_grid_comp.F90:708
15:MPT: #12 0x00002aaf72d0b509 in ESMCI::FTable::callVFuncPtr(char const*, ESMCI::VM*, int*) ()
15:MPT:    from /glade/p/ral/jntp/GMTB/tools/NCEPLIBS-ufs-v1.0.0.alpha01/intel-18.0.5/mpt-2.19/lib64/libesmf.so
15:MPT: #13 0x00002aaf72d0f0db in ESMCI_FTableCallEntryPointVMHop ()

@jedwards4b (Collaborator, Author)

I tried nstf_name = 0,1,1,0,5; this also crashes in the same way.
I went back to nstf_name = 2,1,1,0,5 and the model runs to completion.

@jedwards4b (Collaborator, Author)

@arunchawla-NOAA Can we get the post-chgres_cube files for this case and try running from those, so that we can determine whether the problem is there?

@jedwards4b (Collaborator, Author)

@BinLiu-NOAA What do you suggest next?

@BinLiu-NOAA (Collaborator)

@jedwards4b I am not sure why nstf_name = 0,1,1,0,5 or nstf_name = 0,0,0,0,0 did not work in your test. But if nstf_name = 2,1,1,0,5 worked, you probably want to double-check whether the NSST-related output fields are correct. I would suggest checking with @GeorgeGayno-NOAA and xu.li@noaa.gov at EMC to see if chgres_cube and the NSST component of ufs-weather-model are working properly.

Also, relatedly, did you also run a test using a NEMSIO-format GFS file? If so, that simulation could serve as a kind of control experiment for the one initialized from grib2-format GFS files.

Bin

@uturuncoglu (Collaborator)

@BinLiu-NOAA We used grib2 input in this case. We don't have NEMSIO files for the same date (the Dorian case) because the NOMADS server only keeps the last 10 days. If you know of a public place where we could get NEMSIO files to run the model, we could try that. If you or @GeorgeGayno-NOAA have a recent run (Dorian case) that uses the chgres + grib2 combination and works with these options, please share the namelist files and the input files, so we can make some comparisons. Otherwise, it will be hard for us to find the source of the problem without deep experience with NSST, the model, and chgres.

@BinLiu-NOAA (Collaborator)

@jedwards4b For your reference, I have posted below the namelist files for my chgres_cube and forecast jobs from my C96 grib2 test. They are on Hera, together with the chgres_cube and forecast results, if you happen to have access there. Meanwhile, just for clarification, I was using earlier versions of UFS_UTILS's chgres and the ufs-weather-model (more specifically, the HAFS application), but I don't think that makes much difference here.

Bin

more /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_chgres_g2new_C96_2019082900_05L.work/fort.41
&config
mosaic_file_target_grid="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../HAFS_uniform_grid_C96/C96/C96_mosaic.nc"
fix_dir_target_grid="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../HAFS_uniform_grid_C96/C96"
orog_dir_target_grid="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../HAFS_uniform_grid_C96/C96"
orog_files_target_grid="C96_oro_data.tile1.nc","C96_oro_data.tile2.nc","C96_oro_data.tile3.nc","C96_oro_data.tile4.nc","C96_oro_data.tile5.nc","C96_oro_data.tile6.nc"
vcoord_file_target_grid="/scratch1/NCEPDEV/hwrf/save/Bin.Liu/hafs_201910/fix/fix_am/global_hyblev.l65.txt"
mosaic_file_input_grid="NULL"
orog_dir_input_grid="NULL"
orog_files_input_grid="NULL"
data_dir_input_grid="./"
atm_files_input_grid="gfs.t00z.pgrb2.1p00.f000"
sfc_files_input_grid="gfs.t00z.pgrb2.1p00.f000"
grib2_file_input_grid="gfs.t00z.pgrb2.1p00.f000"
varmap_file="/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/chgres_driver/../hafs_utils.fd/parm/varmap_tables/FV3GFSphys_var_map.txt"
cycle_mon=08
cycle_day=29
cycle_hour=00
convert_atm=.true.
convert_sfc=.true.
convert_nst=.false.
input_type="grib2"
tracers="sphum","liq_wat","o3mr"
tracers_input="spfh","clwmr","o3mr"
regional=0
halo_bndy=0
/

more /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_forecast_g2new_C96_2019082900_05L/input.nml
&amip_interp_nml
interp_oi_sst = .true.
use_ncep_sst = .true.
use_ncep_ice = .false.
no_anom_sst = .false.
data_set = 'reynolds_oi'
date_out_of_range = 'climo'
/

&atmos_model_nml
blocksize = 32
chksum_debug = .false.
dycore_only = .false.
fdiag = 3
avg_max_length = 3600.
fhmax = 240
fhout = 3
fhmaxhf = 0
fhouthf = 3
/

&diag_manager_nml
prepend_date = .false.
/

&fms_io_nml
checksum_required = .false.
max_files_r = 100,
max_files_w = 100,
/

&fms_nml
clock_grain = 'ROUTINE',
domains_stack_size = 120000000,
print_memory_usage = .false.
/

&fv_grid_nml
!grid_file = 'INPUT/grid_spec.nc'
/

&fv_core_nml
!layout = 12,12
!layout = 8,8
layout = 8,8
io_layout = 1,1
npx = 97
npy = 97
ntiles = 6
npz = 64
!grid_type = -1
make_nh = .F.
fv_debug = .F.
range_warn = .T.
reset_eta = .F.
n_sponge = 10
nudge_qv = .T.
nudge_dz = .F.
tau = 10.
rf_cutoff = 7.5e2
d2_bg_k1 = 0.15
d2_bg_k2 = 0.02
kord_tm = -9
kord_mt = 9
kord_wz = 9
kord_tr = 9
hydrostatic = .F.
phys_hydrostatic = .F.
use_hydro_pressure = .F.
beta = 0.
a_imp = 1.
p_fac = 0.1
k_split = 2
n_split = 6
nwat = 6
na_init = 1
d_ext = 0.0
dnats = 1
fv_sg_adj = 450
d2_bg = 0.
nord = 2
dddmp = 0.1
d4_bg = 0.12
vtdm4 = 0.02
delt_max = 0.002
ke_bg = 0.
do_vort_damp = .T.
external_ic = .T.
external_eta = .T.
gfs_phil = .false.
nggps_ic = .T.
mountain = .F.
ncep_ic = .F.
d_con = 1.0
hord_mt = 5
hord_vt = 5
hord_tm = 5
hord_dp = -5
hord_tr = 8
adjust_dry_mass = .F.
consv_te = 1.
do_sat_adj = .T.
consv_am = .F.
fill = .T.
dwind_2d = .F.
print_freq = 3
warm_start = .F.
no_dycore = .false.
z_tracer = .T.
agrid_vel_rst = .true.
read_increment = .F.
res_latlon_dynamics = "fv3_increment.nc"
write_3d_diags = .true.
/

&surf_map_nml
zero_ocean = .F.
cd4 = 0.15
cd2 = -1
n_del2_strong = 0
n_del2_weak = 15
n_del4 = 2
max_slope = 0.4
peak_fac = 1.
/

&external_ic_nml
filtered_terrain = .true.
levp = 65
gfs_dwinds = .true.
checker_tr = .F.
nt_checker = 0
/

&gfs_physics_nml
fhzero = 3.
ldiag3d = .false.
lradar = .true.
avg_max_length = 3600.
h2o_phys = .true.
fhcyc = 24.
use_ufo = .true.
pre_rad = .false.
ncld = 5
imp_physics = 11
pdfcld = .false.
fhswr = 3600.
fhlwr = 3600.
ialb = 1
iems = 1
iaer = 111
ico2 = 2
isubc_sw = 2
isubc_lw = 2
isol = 2
lwhtr = .true.
swhtr = .true.
cnvgwd = .true.
shal_cnv = .true. !Shallow convection
cal_pre = .false.
redrag = .true.
dspheat = .true.
hybedmf = .true.
moninq_fac = -1.0
satmedmf = .false.
random_clds = .false.
trans_trac = .true.
cnvcld = .true.
imfshalcnv = 2
imfdeepcnv = 2
cdmbgwd = 3.5, 0.25
sfc_z0_type = 6
prslrd0 = 0.
ivegsrc = 1
isot = 1
debug = .false.
nst_anl = .true.
nstf_name = 0,0,0,0,0
psautco = 0.0008, 0.0005
prautco = 0.00015, 0.00015
iau_delthrs = 6
iaufhrs = 30
iau_inc_files = ''
do_deep = .true.
lgfdlmprad = .true.
effr_in = .true.
/

&gfdl_cloud_microphysics_nml
sedi_transport = .true.
do_sedi_heat = .false.
rad_snow = .true.
rad_graupel = .true.
rad_rain = .true.
const_vi = .F.
const_vs = .F.
const_vg = .F.
const_vr = .F.
vi_max = 1.
vs_max = 2.
vg_max = 12.
vr_max = 12.
qi_lim = 1.
prog_ccn = .false.
do_qa = .true.
fast_sat_adj = .true.
tau_l2v = 225.
tau_v2l = 150.
tau_g2v = 900.
rthresh = 10.e-6 ! This is a key parameter for cloud water
dw_land = 0.16
dw_ocean = 0.10
ql_gen = 1.0e-3
ql_mlt = 1.0e-3
qi0_crt = 8.0E-5
qs0_crt = 1.0e-3
tau_i2s = 1000.
c_psaci = 0.05
c_pgacs = 0.01
rh_inc = 0.30
rh_inr = 0.30
rh_ins = 0.30
ccn_l = 300.
ccn_o = 100.
c_paut = 0.5
c_cracw = 0.8
use_ppm = .false.
use_ccn = .true.
mono_prof = .true.
z_slope_liq = .true.
z_slope_ice = .true.
de_ice = .false.
fix_negative = .true.
icloud_f = 1
mp_time = 150.
/

&interpolator_nml
interp_method = 'conserve_great_circle'
/

&namsfc
FNGLAC = "global_glacier.2x2.grb",
FNMXIC = "global_maxice.2x2.grb",
FNTSFC = "RTGSST.1982.2012.monthly.clim.grb",
FNSNOC = "global_snoclim.1.875.grb",
FNZORC = "igbp"
!FNZORC = "global_zorclim.1x1.grb",
FNALBC = "global_snowfree_albedo.bosu.t1534.3072.1536.rg.grb",
FNALBC2 = "global_albedo4.1x1.grb",
FNAISC = "CFSR.SEAICE.1982.2012.monthly.clim.grb",
FNTG3C = "global_tg3clim.2.6x1.5.grb",
FNVEGC = "global_vegfrac.0.144.decpercent.grb",
FNVETC = "global_vegtype.igbp.t1534.3072.1536.rg.grb",
FNSOTC = "global_soiltype.statsgo.t1534.3072.1536.rg.grb",
FNSMCC = "global_soilmgldas.t1534.3072.1536.grb",
FNMSKH = "seaice_newland.grb",
FNTSFA = "",
FNACNA = "",
FNSNOA = "",
FNVMNC = "global_shdmin.0.144x0.144.grb",
FNVMXC = "global_shdmax.0.144x0.144.grb",
FNSLPC = "global_slope.1x1.grb",
FNABSC = "global_mxsnoalb.uariz.t1534.3072.1536.rg.grb",
LDEBUG =.true.,
FSMCL(2) = 99999
FSMCL(3) = 99999
FSMCL(4) = 99999
FTSFS = 90
FAISS = 99999
FSNOL = 99999
FSICL = 99999
FTSFL = 99999
FAISL = 99999
FVETL = 99999,
FSOTL = 99999,
FvmnL = 99999,
FvmxL = 99999,
FSLPL = 99999,
FABSL = 99999,
FSNOS = 99999,
FSICS = 99999,
/
&nam_stochy
/
&nam_sfcperts
/

@uturuncoglu (Collaborator)

@BinLiu-NOAA Thanks for the namelist files. I'll check them. In the meantime, I think it is better to test with the versions that we are using, because the issue could be related to CHGRES or to model stability.

  • The model - ufs-v1.0.0.beta01
  • NCEPLIBS - 1.0.0alpha01

It would be an easy test to use CHGRES from 1.0.0alpha01 to generate the input files and run the model.

@uturuncoglu (Collaborator)

BTW, your CHGRES namelist has lots of options, and I think most of them are at their default values, right? Anyway, we could try using your namelist to see what happens.

@uturuncoglu (Collaborator)

Is this process grib2? How can we access the input data? Is it in a public place? You are also setting tracers and tracers_input, and I think those are not required for grib2 input.

@pjpegion (Collaborator)

@rsdunlapiv Thanks, it works now.
@jedwards4b My run with the GNU compiler is complete. I will compare my results with yours.

@pjpegion (Collaborator)

@jedwards4b my run was with the GFS analysis files, not the new "canned winds" file that Kate Friedman supplied. Do you still have the results of the run that started this thread?

@BinLiu-NOAA (Collaborator)

Quoting @uturuncoglu: "BTW, your CHGRES namelist has lots of options, and I think most of them are at their default values, right? Anyway, we could try using your namelist to see what happens."

@uturuncoglu, please use your own versions of the namelist files. I posted mine just for your reference (because you asked). Again, as I mentioned, my tests were based on earlier versions of UFS_UTILS's chgres_cube and the ufs-weather-model.

@jedwards4b (Collaborator, Author)

@rsdunlapiv Can you tar up the directory /scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_forecast_g2new_C96_2019082900_05L/ on Hera and transfer it to Cheyenne?

To confirm, please make sure the file gfs.t00z.pgrb2.1p00.f000 is there.

We also need
/scratch1/NCEPDEV/hwrf/scrub/Bin.Liu/ufs_utils_grib2/HAFS_uniform_chgres_g2new_C96_2019082900_05L.work/INPUT

Thanks

@uturuncoglu (Collaborator)

@BinLiu-NOAA JFYI, I compared your input.nml with the CIME-generated one and found the following differences (the values in parentheses are the CIME ones):

domains_stack_size 120000000 (3000000)
iau_delthrs 6 (3)
iaufhrs 30 (-1)
lradar .true. (.false.)
moninq_fac -1.0 (1.0)
sfc_z0_type 6 (0)
max_slope 0.4 (0.15)
n_del2_weak 15 (12)
n_del4 2 (1)
zero_ocean .false. (.true.)

So we have lots of differences. We were using the following documents to set the defaults for the mr-weather model:

v16beta - https://docs.google.com/document/d/1bLbVdWgEIknDQZgTuOZ6IPVEGv5jUgOrCm4GrR96oBU/edit
v15p2 - https://docs.google.com/document/u/1/d/1EKc2mAld5VsrNjTRgqUcTVG1ZcEIkllA-NrAKUs4DWI/edit

Some of the options listed above do not even appear in those Google Docs, so we are using the defaults from the source code. This suggests there is no common namelist file on the EMC side, which makes it hard to find the source of the problem. I think we need to use the same

  • model version
  • namelists (input.nml, model_configure, fort.41 etc.)
  • same input
  • ...

Besides this, we also tested those options, and the model still fails in the same place. We are still investigating the source of the problem.
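As an aside, this kind of namelist comparison is easy to script; a minimal sketch using the third-party f90nml package (the file names here are hypothetical):

import f90nml

# Read both namelist files into dict-like Namelist objects (group -> key -> value).
cime = f90nml.read("input.nml.cime")
ref = f90nml.read("input.nml.binliu")

# Walk the union of groups and keys, printing any value that differs.
for group in sorted(set(cime) | set(ref)):
    g_cime, g_ref = cime.get(group, {}), ref.get(group, {})
    for key in sorted(set(g_cime) | set(g_ref)):
        if g_cime.get(key) != g_ref.get(key):
            print(f"{group}%{key}: {g_ref.get(key)} ({g_cime.get(key)})")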

@rsdunlapiv (Collaborator)

@BinLiu-NOAA is this a configuration that is only expected to work for the HAFS (hurricane) application? The focus of the release is global medium-range weather, so if HAFS has a lot of application-specific changes, it may not be suitable for the release at this time.

@arunchawla-NOAA (Collaborator)

@junwang-noaa and @KateFriedman-NOAA, can you help here? The namelist that @jedwards4b built, based on the documentation provided, is different from the one that @BinLiu-NOAA is running.

Can we identify what those differences signify? We want to understand why the runs are blowing up.

@jedwards4b (Collaborator, Author)

The problem seems to be strongly linked to turning OFF NSST.

@arunchawla-NOAA (Collaborator)

@jedwards4b just for clarification, do the runs work when you use the namelists from @BinLiu-NOAA?

@jedwards4b (Collaborator, Author)

The namelists from @BinLiu-NOAA are from different versions of both the model and chgres; they define namelist variables that we don't have, so we can't simply copy the namelist. We tried to pick out all the differences that we could find and ran with those; the model died in exactly the same way.

@arunchawla-NOAA (Collaborator)

OK, we will get together tomorrow at EMC and get back to you.

@jedwards4b (Collaborator, Author)

The file gfs.t00z.pgrb2.1p00.f000 is the input used by @BinLiu-NOAA's case. It doesn't have the same name as the file on the FTP site, gfs_4_20190829_0000_000.grb2, and we haven't been able to find the file on Hera to confirm whether or not it is the same file.

@arunchawla-NOAA (Collaborator)

So here is the summary of a long discussion between EMC, NCAR, and DTC:

  1. The flags that the CIME team is using are correct, with a few minor tweaks:
    a) When using NEMSIO, use nstf_name = 2,0,0,0,0, as this will use the NSST fields stored in the NEMSIO data.
    b) When using grib2, use nstf_name = 2,1,0,0,0, as this will spin up NSST.

  2. If we do not want to use NSST, then we have to set nstf_name = 0,0,0,0,0 but also change the CCPP suite definition file [the confusion at EMC was that only the former was needed in IPD].

  3. We are still determining whether the appropriate thing with grib2 data is to turn NSST off (and hence change the suite definition) or to let it spin up, knowing the results will not be completely accurate. We will do a couple of runs.

Bottom line: CIME has the right flags; we just need to decide on using 2,0,0,0,0 for NEMSIO (Jim will do a test to see how this works).

All other differences in the namelists were a red herring, since we were testing with different codes; they should be ignored.

We still have a decision to make for NSST with grib2 data, and there is a related ticket for that:

#87
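For reference, a minimal sketch of the two recommended settings above as they would appear in &gfs_physics_nml (everything else unchanged; per point 2, turning NSST off entirely would additionally require editing the CCPP suite definition file):

&gfs_physics_nml
  nstf_name = 2,0,0,0,0   ! NEMSIO input: use the NSST fields stored in the file
! nstf_name = 2,1,0,0,0   ! grib2 input: no NSST fields available, so spin NSST up
/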

@jedwards4b (Collaborator, Author)

I have created PRs NOAA-EMC/fv3atm#67 and ESCOMP/FV3GFS_interface#5 to resolve this issue, given that the 8-day output is acceptable.

@junwang-noaa

@jedwards4b Why do we need PR NOAA-EMC/fv3atm#67? I thought we were resolving the issue using method 1) from Arun's email.

@jedwards4b (Collaborator, Author)

jedwards4b commented Feb 16, 2020 via email

@jedwards4b (Collaborator, Author)

@arunchawla-NOAA You were going to provide an ncl script to verify the Dorian test case.

@arunchawla-NOAA (Collaborator)

We have sample ncl scripts ready for review; we want to check them on other systems.

@ceceliadid

@jedwards4b Which directories/files do you need to get from https://ftp.emc.ncep.noaa.gov/EIB/UFS/
as input files for the Dorian case?

@jedwards4b (Collaborator, Author)

@ceceliadid just this one: https://ftp.emc.ncep.noaa.gov/EIB/UFS/inputdata/201908/20190829/gfs_4_20190829_0000_000.grb2

If you want the complete list of fix files, that's a lot more complicated to provide.

@ceceliadid

@jedwards4b Great, thanks. Yup, I need the fix files too, to set up the Dorian example on Stampede. It would be really nice for the non-preconfigured platforms to have another tarball in the FTP directory, like the simple-test-case one. Are you using the same fix files that are in that simple-test-case directory, or pulling them out of global/fix?

@jedwards4b (Collaborator, Author)

jedwards4b commented Mar 1, 2020 via email

@ceceliadid

Okay, so if I'm on Stampede, CIME will go and get the fix files it needs from the EMC FTP site; I didn't understand that. During which CIME step (e.g., setup, build) are the .input_data_list files supposed to be created, and when is check_input_data called (assuming that's what fetches the files)? Going into ./case.build, the Buildconf directory has nothing in it for me; does that suggest an error?
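For context, the usual CIME flow is roughly the following (a sketch based on general CIME behavior, not verified against this specific release; details may differ by version):

./case.setup                     # configure the case for the machine
./preview_namelists              # creates Buildconf/*.input_data_list (also run by case.build)
./check_input_data --download    # check for, and optionally fetch, missing input files
./case.build
./case.submit                    # re-runs the input-data check before submitting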

@jedwards4b (Collaborator, Author)

jedwards4b commented Mar 1, 2020 via email

@ceceliadid

On Stampede, should I be setting the --machine argument of create_newcase to linux? It looks like that's what the instructions say to do, but I get: ERROR: No machine linux found

@ceceliadid

Never mind, I realized I probably have to go to the config_machines.xml file and deal with porting CIME to a new machine to get the example to work on Stampede.

The documentation in 5.2 should probably say to set machine to "centos7-linux" instead of linux, if that's going to be the name of the more generic Linux platform.

I see that there is a stampede2-skx entry in config_machines.xml, with lots of hardcoded Jim-specific paths in it. Would it make sense to make that more general and offer a Stampede template that people could start with?

@jedwards4b (Collaborator, Author)

jedwards4b commented Mar 1, 2020 via email

@ceceliadid

@jedwards4b @climbfuji That doesn't work, I think because of the same permissions issue we ran into before. It looks like case.setup goes to the config_machines.xml file with all the hardcoded paths in it. It is setting locations like DIN_LOC_ROOT according to the values in the Stampede section, so DIN_LOC_ROOT is set to /work/01118/tg803972/stampede2/UFS/ufs_inputdata.

That's the same UFS directory that Dom and I couldn't read before, and I get a read error now as well. Does the Stampede2 app build depend on reading from that UFS directory?

Or maybe I'm doing something wrong?

@jedwards4b (Collaborator, Author)

jedwards4b commented Mar 2, 2020 via email

@ceceliadid

Thank you for answering so many questions over the weekend!
I'm using the command in the quick start:
git clone -b ufs-v1.0.0.alpha02 https://github.com/ufs-community/ufs-mrweather-app.git my_ufs_sandbox

I guess I should be leaving off the tag?
git clone https://github.com/ufs-community/ufs-mrweather-app.git my_ufs_sandbox

I exported UFS_SCRATCH and UFS_INPUT as in the quick start.

@jedwards4b (Collaborator, Author)

jedwards4b commented Mar 2, 2020 via email

@jedwards4b (Collaborator, Author)

On Cheyenne I ran the test SMS_Ld2.C96.GFSv15p2.cheyenne_intel --workflow ufs-mrweather and plotted the 48-hour results for comparison against ftp.emc.ncep.noaa.gov/EIB/UFS/visualization_example/Dorian_C96_noNSST.

I think the ncl script needs more documentation. I needed to make a plots subdirectory, edit the script to set MaxLen=17, and then run:

ncl 'Model="UFS"' 'initDate=2019082900' 'Dir="/glade/scratch/jedwards/SMS_Ld2.C96.GFSv15p2.cheyenne_intel.20200302_080140_u0rgyu/run/"' t2_wind10_GFS_grb.ncl

@ceceliadid

I can build now on Stampede; I was just about to put in the 48-hour run.
I would like to test the plotting script once you have it.

@ligiabernardet (Collaborator)

ligiabernardet commented Mar 2, 2020 via email

@ligiabernardet (Collaborator)

ligiabernardet commented Mar 2, 2020 via email

@arunchawla-NOAA (Collaborator)

@jedwards4b this issue seems to have morphed into a testing status update mechanism. Since the original problem has been resolved, I suggest closing this issue.

@jedwards4b (Collaborator, Author)

Yes, I ran on Cheyenne and was able to reproduce the plots from the FTP site.
