Release/gfsda.v16.3.0 #456

Merged

Conversation

@RussTreadon-NOAA commented Aug 23, 2022

A review of GFS v16 bugzillas in issue #137 resulted in several updates to code, scripts, and fix files in release/gfsda.v16.3.0. The authoritative release/gfsda.v16.3.0 was forked into Russ.Treadon-NOAA/GSI. Bugzilla updates have been committed to the forked release/gfsda.v16.3.0. These changes are detailed in #137.

Merging this PR into the authoritative release/gfsda.v16.3.0 closes issue #137. Closure of issue #137 does not imply closure of the associated NCO bugzillas. EIB and the GFS v16.3.0 DA POC should follow up with NCO to see which GFS bugzillas may be closed.

@RussTreadon-NOAA

Parallel Test
Conduct an operational resolution parallel on Cactus covering 2021110806 through 2021110900 using warm start ICs from the EMC retro1-v16-ecf. The DA component for this test was built from Russ.Treadon-NOAA:release/gfsda.v16.3.0. The installation set PRUNE_4NCO=YES in g-w sorc/build_gsi.sh. Comparison of test output from 2021110900 with the control shows the results to be identical.
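
A comparison of this sort can be scripted as a bitwise file check; the following is a hedged sketch with hypothetical COMROT directory names, not the exact commands used.

    # Hypothetical bitwise comparison of test vs control 2021110900 output.
    for f in $TEST_COMROT/gdas.20211109/00/atmos/*; do
       cmp -s "$f" "$CTRL_COMROT/gdas.20211109/00/atmos/$(basename "$f")" \
         || echo "differs: $(basename "$f")"
    done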

Given this result, this PR is changed from draft to active.

Tagging @emilyhcliu , @EdwardSafford-NOAA , @aerorahul , @dtkleist for awareness.

@RussTreadon-NOAA marked this pull request as ready for review August 24, 2022 12:06
@aerorahul left a comment

I am ok approving these changes for v16.3.
Do we need to discuss with NCO whether these bugzilla changes/fixes are acceptable after code delivery?
I can't imagine why they would not.

Are the utilities and monitoring changes also tested in the short parallel?

Also, these same changes should be pulled into develop of the respective repos when possible.

Thanks for addressing the Bugzilla issues.

@RussTreadon-NOAA

MinMon output was not saved due to a path error. Let me correct this and rerun MinMon. OznMon and RadMon ran successfully to completion.

Since this was a cycled parallel from 2021110806 through 2021110900, all jobs in the g-w rocoto workflow were executed except the following:

  • all waves jobs off: DO_WAVE="NO"
  • all downstream jobs off: DO_BUFRSND="NO", DO_GEMPAK="NO", DO_AWIPS="NO", WAFSF="NO"

DA apps executed in the EMC retro1-v16-ecf are likewise executed in the rocoto workflow. This includes DA utilities (e.g., calc_analysis.x, getsigensmeanp_smooth.x, recentersigp.x, ...) as well as the main DA executables, gsi.x and enkf.x.

@emilyhcliu

@RussTreadon-NOAA @aerorahul I am reviewing the changes.

@emilyhcliu

@RussTreadon-NOAA @aerorahul , and @KateFriedman-NOAA
I received an e-mail from Steve Earle this morning about the gfs.v16.3.0 code delivery. He mentioned that he noticed Russ is making changes to address bugzillas for GSI. He wants to be sure that Russ's bugzilla fixes make it into gfs.v16.3.0.

@RussTreadon-NOAA

DA Monitor output

After correcting the path error for MinMon, I reran the control and test MinMon jobs. Output from the two parallels is identical.

Comparison of OznMon and RadMon output is more involved given that output is saved in compressed tarballs. After uncompressing and untarring the output, control and test OznMon output is identical.

The same is NOT true for RadMon output: not all test and control RadMon output is identical. Two types of differences, explained below, were found.

  1. Given the change to util/Radiance_Monitor/nwprod/gdas_radmon/fix/gdas_radmon_scaninfo.txt, the xdef line in the angle.viirs*.ctl files differs. The control has

xdef 221 linear 0.0 1.0

whereas the test has

xdef 90 linear -56.3 1.3

  2. The test and control angle and bcor binary .ieee_d files differ. This is due to changes related to the treatment of bifix(angord+1). angle_bias.f90 has

cor_fixang(1) = data_chan(j)%bifix(angord+1)

Element bifix(angord+1) does not appear to be defined for netcdf radiance diagnostic files. The -check all test failed when attempting to access this memory location due to a shape mismatch error. The change committed to the forked release/gfsda.v16.3.0 was to restrict the bifix(angord+1) line to binary diagnostic files. The revised code has

cor_fixang(1) = 0.0
if (.not.netcdf) cor_fixang(1) = data_chan(j)%bifix(angord+1)

The same change was made to bcor.f90.

This explains differences between control and test angle and bcor RadMon output.

The question to be answered is "Which output is correct?" The answer to this question does NOT impact cycled results. Impact is limited to RadMon angle and bcor ieee_d output.

@emilyhcliu

@RussTreadon-NOAA @aerorahul Do you have any other changes you want to make for this PR? I reviewed the changes and they looked good to me. Can I go ahead and merge this PR to the authoritative release/gfsda.v16.3.0?

@emilyhcliu

@RussTreadon-NOAA Found one more fix to add for the radiance monitoring: gdas_radmon_satype.txt
VIIRS sensors were added. However, avhrr_metop-c and avhrr_metop-a are missing.

@RussTreadon-NOAA

> @RussTreadon-NOAA Found one more fix to add for the radiance monitoring: gdas_radmon_satype.txt VIIRS sensors were added. However, avhrr_metop-c and avhrr_metop-a are missing.

Great, please correct as appropriate. Long term we should see if it is possible for RadMon to use fix/global_scaninfo.txt. Then we only need to update a single scaninfo fix file.

@RussTreadon-NOAA

RadMon test

As a test, undo the data_chan(j)%bifix(angord+1) changes in verf_radang.fd/angle_bias.f90 and verf_radbcor.fd/bcor.f90. Rebuild release/gfsda.v16.3.0 apps and rerun 2021110900 gdas vrfy. Compare resulting RadMon output with control output. The only differences are in the viirs files. This is expected due to the changes in gdas_radmon_scaninfo.txt.

Print statements added to bcor.f90 sometimes show 1.1210388E-43 in this memory location, while at other times the value is 0.0000000E+00. Other values might be possible in other runs. With the changes committed to the forked release/gfsda.v16.3.0, a value of 0.0 is always loaded into cor_fixang(1). This is preferable to the various values seen loaded in the control run.
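
The print statements themselves are not part of the commit; a minimal sketch of the kind of statement used, assuming the same data_chan structure referenced above:

    ! Hypothetical debug print to inspect the bifix(angord+1) slot.
    write(6,*) 'j, bifix(angord+1) = ', j, data_chan(j)%bifix(angord+1)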

@RussTreadon-NOAA

> @RussTreadon-NOAA @aerorahul Do you have any other changes you want to make for this PR? I reviewed the changes and they looked good to me. Can I go ahead and merge this PR to the authoritative release/gfsda.v16.3.0?

No additional changes to release/gfsda.v16.3.0 from me. Changes I made were only in response to bugzillas and the promotion of crtm/2.4.0 to an operational location.

@emilyhcliu

> @RussTreadon-NOAA Found one more fix to add for the radiance monitoring: gdas_radmon_satype.txt VIIRS sensors were added. However, avhrr_metop-c and avhrr_metop-a are missing.

> Great, please correct as appropriate. Long term we should see if it is possible for RadMon to use fix/global_scaninfo.txt. Then we only need to update a single scaninfo fix file.

@RussTreadon-NOAA Is it OK to include the changes for gdas_radmon_satype.txt in this PR?

@RussTreadon-NOAA commented Aug 24, 2022

> @RussTreadon-NOAA Found one more fix to add for the radiance monitoring: gdas_radmon_satype.txt VIIRS sensors were added. However, avhrr_metop-c and avhrr_metop-a are missing.

> Great, please correct as appropriate. Long term we should see if it is possible for RadMon to use fix/global_scaninfo.txt. Then we only need to update a single scaninfo fix file.

> @RussTreadon-NOAA Is it OK to include the changes for gdas_radmon_satype.txt in this PR?

The change to gdas_radmon_scaninfo.txt I made was in response to errors detected by execution of RadMon executables built with -check all. As such, this change was in response to a bugzilla.

@emilyhcliu

@RussTreadon-NOAA I found that we do not have IASI MetOp-C in the operational enkf update script, even though IASI MetOp-C is assimilated in the operational GSI. iasi metop-c is included in the v16.3.0 enkf update script. We do need to find a way to avoid maintaining multiple entries for the same information.

@emilyhcliu

Another thing I found is that we do not have iasi metop-c added to the enkf update script.

> @RussTreadon-NOAA Found one more fix to add for the radiance monitoring: gdas_radmon_satype.txt VIIRS sensors were added. However, avhrr_metop-c and avhrr_metop-a are missing.

> Great, please correct as appropriate. Long term we should see if it is possible for RadMon to use fix/global_scaninfo.txt. Then we only need to update a single scaninfo fix file.

> @RussTreadon-NOAA Is it OK to include the changes for gdas_radmon_satype.txt in this PR?

> The change to gdas_radmon_sattype.txt I made was in response to errors detected by execution of RadMon executables built with -check all. As such, this change was in response to a bugzilla.

You do not have a change for gdas_radmon_sattype.txt; your change is in gdas_radmon_scaninfo.txt. But I see your point: this PR is for bugzilla fixes. I will fix gdas_radmon_sattype.txt in another PR.

@RussTreadon-NOAA

OK. My mistake in not reading your comment carefully. Your call as to whether or not to include the gdas_radmon_sattype.txt fix in this PR.

@RussTreadon-NOAA commented Aug 24, 2022

@emilyhcliu, I do not understand your enkf_update comment with regard to iasi metop-c. I see iasi_metop-c in scripts/exgdas_enkf_update.sh on line 352:

sattypes_rad(71)= 'iasi_metop-c', dsis(71)= 'iasi_metop-c',

A check of enkfgdas.20211129/12/atmos/gdas.t12z.enkfstat from the retro1-v16-ecf shows iasi_metop-c data being read and processed

nid001055.cactus.wcoss2.ncep.noaa.gov 2:    69               abi_g16  num_obs_tot=   1177710
nid001055.cactus.wcoss2.ncep.noaa.gov 2:    71          iasi_metop-c  num_obs_tot=   1616988
nid001055.cactus.wcoss2.ncep.noaa.gov 2:    72           viirs-m_npp  num_obs_tot=   1705881

and

           amsua_n18  15  2605  0.267E+00  0.526E+01  0.377E+01  0.140E+01  0.350E+01
        iasi_metop-c  16  4353 -0.837E-01  0.491E+00  0.138E+01  0.103E+00  0.138E+01
        iasi_metop-c  38  4109 -0.252E-01  0.397E+00  0.724E+00  0.751E-01  0.720E+00
        iasi_metop-c  49  4353 -0.258E-01  0.361E+00  0.654E+00  0.748E-01  0.650E+00

What am I missing?

Do your iasi_metop-c comments apply to GFS v16.2 operations or the GFS v16.3.0 implementation? A check of GFS v16.2 operations echoes your statement: GFS v16.2 assimilates iasi_metop-c in gsi.x but not in enkf.x. This is not consistent. For GFS v16.3.0, both gsi.x and enkf.x assimilate iasi_metop-c. This is consistent.

@emilyhcliu

> @emilyhcliu, I do not understand your enkf_update comment with regard to iasi metop-c. I see iasi_metop-c in scripts/exgdas_enkf_update.sh on line 352:
>
> sattypes_rad(71)= 'iasi_metop-c', dsis(71)= 'iasi_metop-c',
>
> A check of enkfgdas.20211129/12/atmos/gdas.t12z.enkfstat from the retro1-v16-ecf shows iasi_metop-c data being read and processed
>
> nid001055.cactus.wcoss2.ncep.noaa.gov 2:    69               abi_g16  num_obs_tot=   1177710
> nid001055.cactus.wcoss2.ncep.noaa.gov 2:    71          iasi_metop-c  num_obs_tot=   1616988
> nid001055.cactus.wcoss2.ncep.noaa.gov 2:    72           viirs-m_npp  num_obs_tot=   1705881
>
> and
>
>            amsua_n18  15  2605  0.267E+00  0.526E+01  0.377E+01  0.140E+01  0.350E+01
>         iasi_metop-c  16  4353 -0.837E-01  0.491E+00  0.138E+01  0.103E+00  0.138E+01
>         iasi_metop-c  38  4109 -0.252E-01  0.397E+00  0.724E+00  0.751E-01  0.720E+00
>         iasi_metop-c  49  4353 -0.258E-01  0.361E+00  0.654E+00  0.748E-01  0.650E+00
>
> What am I missing?
>
> Do your iasi_metop-c comments apply to GFS v16.2 operations or the GFS v16.3.0 implementation? A check of GFS v16.2 operations echoes your statement: GFS v16.2 assimilates iasi_metop-c in gsi.x but not in enkf.x. This is not consistent. For GFS v16.3.0, both gsi.x and enkf.x assimilate iasi_metop-c. This is consistent.

I checked /lfs/h1/ops/prod/packages/gfs.v16.2.2/scripts/exgdas_enkf_update.sh.
I did not see iasi metop-c in the namelist, but we do assimilate iasi metop-c in the operational GSI.

@emilyhcliu

> OK. My mistake in not reading your comment carefully. Your call as to whether or not to include the gdas_radmon_sattype.txt fix in this PR.

Is it ok to fix gdas_radmon_sattype.txt in this PR? (adding avhrr_metop-a and avhrr_metop-c)

@RussTreadon-NOAA

Additional -check all testing

Run 2021110900 gdas with the forked release/gfsda.v16.3.0 built with -check all,noarg_temp_created. Both eobs and anal abort with the error

nid001408.cactus.wcoss2.ncep.noaa.gov 369: forrtl: severe (408): fort: (2): Subscript #1 of the array DATA has value 27 which is greater than the upper bound of 26

Image              PC                Routine            Line        Source
gsi.x              0000000005862EAF  Unknown               Unknown  Unknown
gsi.x              0000000003DADCBD  w_setupsetupw_mp_        1821  setupw.f90
gsi.x              0000000003DA5887  w_setup_mp_setupw        1396  setupw.f90

The line in question from setupw.f90 is

           ! AMVQ Mitigated winds                                                                                                                 
           call nc_diag_metadata("Mitigation_flag_AMVQ",    sngl(data(iamvq,i))    )

iamvq is set to 27. However, when processing FL_HDOB uv data, we only have 26 elements in the data array. Hence the out-of-bounds failure.

We do not have FL_HDOB data every cycle. The 2021110806 cycle did not have any FL_HDOB data, so this out-of-bounds array condition was not triggered. Logic needs to be added to setupw.f90 so that we either do not access iamvq for non-AMV data or expand data to 27 elements even for FL_HDOB uv. The latter option is wasteful.

Tagging @ilianagenkova for awareness.

@RussTreadon-NOAA

The following change to src/gsi/setupw.f90 allows the -check all build to get past the previous point of failure in the 2021110900 gdas eobs and anal.

           ! AMVQ Mitigated winds                                                                                                                 
           if (nele>=iamvq) then
              call nc_diag_metadata("Mitigation_flag_AMVQ",    sngl(data(iamvq,i))    )
           else
              call nc_diag_metadata("Mitigation_flag_AMVQ",    sngl(-9.0) )
           endif

The else block could be omitted; omitting it seems preferable since there are cases (e.g., FL_HDOB) for which we don't have a value for AMVQ. If we don't have a value, there's nothing to write to the diagnostic file.

Integer nele is passed via the subroutine calling list. We could bypass this by checking the size of the first dimension of data via size(data,dim=1).
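
A minimal sketch of that alternative, assuming data is the same rank-2 array referenced above; this is illustrative, not the committed change:

           ! Hypothetical guard keyed off the array shape rather than nele.
           if (size(data,dim=1) >= iamvq) then
              call nc_diag_metadata("Mitigation_flag_AMVQ", sngl(data(iamvq,i)))
           endif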

I'm not comfortable calling this a solution. It's more accurately an engineered way around an out-of-bounds array reference.

@RussTreadon-NOAA

Not writing a missing value for Mitigation_flag_AMVQ when no value is available results in the following warning message being written to stdout for the 2021110900 case

nid003001.cactus.wcoss2.ncep.noaa.gov 602:  ** WARNING: Amount of data written in Mitigation_flag_AMVQ (17984)
             differs from variable Station_ID (18202)!
nid002765.cactus.wcoss2.ncep.noaa.gov 583:  ** WARNING: Amount of data written in Mitigation_flag_AMVQ (19928)
             differs from variable Station_ID (19968)!

Given this, add back the

           else
              call nc_diag_metadata("Mitigation_flag_AMVQ",    sngl(-9.0)          )
           endif

section.

What changes, if any, do we want to make to setupw.f90 for GFS v16.3.0? I don't like the above set of changes but they do allow gsi.x to run to completion with -check all.

Thoughts @HaixiaLiu-NOAA , @ilianagenkova , and @emilyhcliu ?

@ilianagenkova

@RussTreadon-NOAA, I am only seeing this now; we already emailed about it. See #463.

Mitigation_flag_AMVQ is an odd variable. It was needed in preparation for the G-17 mitigated AMVs product (to address the Loop Heat Pipe issue), but now NESDIS leans towards switching to the GOES-18 AMVs product and abandoning the intermediate "mitigated AMVs" product. If/when that happens, Mitigation_flag_AMVQ could be pulled out of GSI, so it won't add to diag file size unnecessarily.

When adding Mitigation_flag_AMVQ to GSI, I did wonder whether we should add it to the binary diag only, to the netcdf diag only, or to both. I ended up adding it to both, and now it's only causing problems.

@emilyhcliu

> Not writing a missing value for Mitigation_flag_AMVQ when no value is available results in the following warning message being written to stdout for the 2021110900 case
>
> nid003001.cactus.wcoss2.ncep.noaa.gov 602:  ** WARNING: Amount of data written in Mitigation_flag_AMVQ (17984)
>              differs from variable Station_ID (18202)!
> nid002765.cactus.wcoss2.ncep.noaa.gov 583:  ** WARNING: Amount of data written in Mitigation_flag_AMVQ (19928)
>              differs from variable Station_ID (19968)!
>
> Given this, add back the
>
>            else
>               call nc_diag_metadata("Mitigation_flag_AMVQ",    sngl(-9.0)          )
>            endif
>
> section.
>
> What changes, if any, do we want to make to setupw.f90 for GFS v16.3.0? I don't like the above set of changes but they do allow gsi.x to run to completion with -check all.
>
> Thoughts @HaixiaLiu-NOAA , @ilianagenkova , and @emilyhcliu ?

I think there are two options:
(1) Mitigation_flag_AMVQ is used for diagnostic purposes, to check whether or not the data is mitigated. We do not need it for assimilation. We can just remove this mitigation flag.
(2) Adopt Russ's solution. It does not impact results, does it?

@RussTreadon-NOAA

Thanks, @emilyhcliu , for your comments.

  1. If the suggestion is to remove Mitigation_flag_AMVQ from the diagnostic file, I cannot comment on whether or not this is an acceptable solution. We need to check with others to learn the reason(s) why this flag was added to the uv diagnostic file.
  2. Loading -9 into the Mitigation_flag_AMVQ slot when it is not available alters the uv diagnostic file, but this does not alter gsi.x analysis results. I find no reference to this field in src/enkf (see the check sketched below), and so the change should not alter enkf.x results.
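
For reference, a check of this sort can be done with a recursive search; a hedged sketch, run from the top of a GSI clone:

    # Hypothetical check: look for any reference to the field in the EnKF source.
    grep -ri "Mitigation_flag_AMVQ" src/enkf/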

@ilianagenkova

@emilyhcliu is correct; the flag is not used in the assimilation process. It was used for diagnostics only (i.e., to decide whether we needed to alter the QC for the mitigated winds). The conclusion was that there is no need for special QC steps for the mitigated winds. And recent communication with NESDIS indicates that the mitigation approach will not be implemented.

@RussTreadon-NOAA

@ilianagenkova , is your recommendation that we remove Mitigation_flag_AMVQ from the diagnostic file and revert all nreal=27 back to nreal=26?

@ilianagenkova

@RussTreadon-NOAA Given the timeline of the v16.3 implementation and NESDIS plans, yes, that's a reasonable way forward. Just to be clear, I only increased nreal for uv in read_satwnd and read_prepbufr.

@RussTreadon-NOAA

> @RussTreadon-NOAA Given the timeline of the v16.3 implementation and NESDIS plans, yes, that's a reasonable way forward. Just to be clear, I only increased nreal for uv in read_satwnd and read_prepbufr.

@ilianagenkova , @HaixiaLiu-NOAA , @emilyhcliu
At which hash did these changes enter develop? The safest approach here is to see exactly what changed. More than just read_prepbufr.f90 and read_satwnd.f90 were changed; changes were also made to setupw.f90. Anywhere else? Comparing the hash immediately prior with the hash at which these changes entered will show us what was changed and where.

@MichaelLueken

@RussTreadon-NOAA
These changes entered develop at affe4ed. Please see PR #294 for all four files modified with this commit.

@RussTreadon-NOAA

> @RussTreadon-NOAA These changes entered develop at affe4ed. Please see PR #294 for all four files modified with this commit.

Thank you, @MichaelLueken-NOAA !

@RussTreadon-NOAA

Using PR #294 as a guide, changes related to the addition of Mitigation_flag_AMVQ were manually removed from read_prepbufr.f90, read_satwnd.f90, and setupw.f90. DA apps were recompiled with -check all, and the 2021110900 gdaseobs, gdasanal, and gfsanal were rerun, followed by gdasediag, gdaseupd, and gdasdiag. While the gdaseobs gsistat shows no difference, the gdasanal and gfsanal gsistat files differ with respect to their controls. This is odd. I need to review the local modifications made to read_prepbufr.f90, read_satwnd.f90, and setupw.f90.

@RussTreadon-NOAA

The modified read_prepbufr.f90, read_satwnd.f90, and setupw.f90 were retained and the DA apps recompiled without the -check all flag; instead, the production compilation options were used. The gdas and gfs analysis jobs were run for 2021110900. The resulting gsistat and atminc.nc files show no difference with respect to runs using the original, unaltered code.

While this is the result we expect and hope to see, it is disturbing that building and running with -check all does not yield reproducible behavior with respect to its control. The -check all test did confirm that gsi.x and enkf.x can run to completion without Mitigation_flag_AMVQ in the code. Let me try the -check all reproducibility test again.

@RussTreadon-NOAA

Revert read_prepbufr.f90 and read_satwnd.f90 to their unaltered state. Revert setupw.f90 to its unaltered state and then comment out the line which causes -check all to fail:

call nc_diag_metadata("Mitigation_flag_AMVQ", sngl(data(iamvq,i)) )

Recompile with -check all and rerun the 2021110900 gdasanal and gfsanal. Let the jobs finish, then rewind and reboot using the same gsi.x from the first run.
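
Here rewind and reboot refer to the usual rocoto commands; a hedged sketch with hypothetical workflow and database file names:

    # Hypothetical rocoto rewind/reboot of the gdasanal task for cycle 2021110900.
    rocotorewind -w gfs_workflow.xml -d gfs_workflow.db -c 202111090000 -t gdasanal
    rocotoboot   -w gfs_workflow.xml -d gfs_workflow.db -c 202111090000 -t gdasanal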

The minimization from the second run differs from the first run. The Initial cost function values from the two runs are identical. The Initial gradient norm values differ, as does the subsequent minimization. For example, gdasanal has

1st run

 J Global                    1.9842299320840521E+06
 End Jo table inner/outer loop           0           1
Initial cost function =  1.984229932084051892E+06
Initial gradient norm =  4.495314784540472829E+03
cost,grad,step,b,step? =   1   0  1.984229932084051892E+06  4.495314784540472829E+03  1.995061581029207831E+00  0.000000000000000000E+00  good
cost,grad,step,b,step? =   1   1  1.926376967643912183E+06  3.252416365797234903E+03  2.855174316806918355E+00  1.106862639083623367E+00  good

2nd run

 J Global                    1.9842299320840521E+06
 End Jo table inner/outer loop           0           1
Initial cost function =  1.984229932084051892E+06
Initial gradient norm =  4.495314784551821504E+03
cost,grad,step,b,step? =   1   0  1.984229932084051892E+06  4.495314784551821504E+03  1.995061581039473175E+00  0.000000000000000000E+00  good
cost,grad,step,b,step? =   1   1  1.926376967643514741E+06  3.252416363770153566E+03  2.855174317492084501E+00  1.106862638505872409E+00  good

Apparently compiling with -check all alters the build such that the resulting gsi.x does not yield bitwise identical output from run to run. This is disturbing!

Given that repeated runs of the production build reproduce one another as well as reproduce the unaltered code, we can say that in production mode the proposed changes to read_prepbufr.f90, read_satwnd.f90, and setupw.f90 do not alter analysis results.

@RussTreadon-NOAA

Rerun 2021110900 enkf, gdas, and gfs using forked release/gfsda.v16.3.0 at 955dae3.

Analysis results reproduce control. Conventional diagnostic files differ between test and control because 955dae3 removes Mitigation_flag_AMVQ from the uv diagnostic file.

Tagging @ilianagenkova and @emilyhcliu for awareness.

@RussTreadon-NOAA

I recommend modifying g-w release/gfs.v16.3.0 sorc/build_gsi.sh so that it prunes pieces of release/gfsda.v16.3.0 which are not used in operations. As noted in #137, pruning sorc/gsi.fd is part of addressing NCO bugzillas.

To prune sorc/gsi.fd during the build, activate the PRUNE_4NCO="YES" line in sorc/build_gsi.sh by removing the # character on the PRUNE_4NCO line so that it reads

# If NCO wants to build, uncomment this line                                                                                                                     
export PRUNE_4NCO="YES"                                                                                                                                         

@RussTreadon-NOAA

Discover that repeated executions of sorc/build_gsi.sh with PRUNE_4NCO="YES" result in build failure. The second and subsequent executions of sorc/build_gsi.sh with pruning active fail because the directories in sorc/gsi.fd to prune have already been removed. Fix this by adding a logical test to see if the file or directory exists before attempting to prune. This change was committed to the forked release/gfsda.v16.3.0 at 36481a2.
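
A minimal sketch of the guard, assuming a hypothetical prune_list variable; the committed change lives in ush/prune_4nco_global.sh:

    # Hypothetical: remove each item only if it still exists so that
    # repeated executions of sorc/build_gsi.sh do not fail.
    for item in $prune_list; do
       if [ -e "$item" ]; then
          rm -rf "$item"
       fi
    done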

Note: If one wants to restore pruned files and directories, execute ush/prune_4nco_global.sh restore in sorc/gsi.fd.

@RussTreadon-NOAA

The forked release/gfsda.v16.3.0 at 36481a2 was installed in g-w release/gfs.v16.3.0 on Cactus. The modified source code files

        src/gsi/stub_wrf_binary_interface.f90
        src/gsi/stub_wrf_netcdf_interface.f90

from PR #464 were included in the local copy of release/gfsda.v16.3.0 prior to the build and install. The export PRUNE_4NCO="YES" line in g-w sorc/build_gsi.sh was activated during the build.

An operational resolution parallel was cycled from 2021110806 through 2021110900. Analysis and forecast files from the enkfgdas, gdas, and gfs at 2021110900 were compared with the control using the authoritative release/gfsda.v16.3.0. Analysis and forecast results are bitwise identical between the two runs. Note that cnvstat files in the new run differ from the control because the forked release/gfsda.v16.3.0 does not write Mitigation_flag_AMVQ to the uv diagnostic file. This variable is not used by the cycled system and so its removal does not impact cycled results.

The changes in PR #456 are ready to be merged into the authoritative release/gfsda.v16.3.0.

@emilyhcliu

@RussTreadon-NOAA I will merge this PR into the authoritative release/gfsda.v16.3.0 first, and then PR #464.
I will ask @MichaelLueken-NOAA to re-tag release/v16.3.0.

One question.
There is a list of bugzillas in Issue #137.
There are a total of 12 bugzillas, and 10 of them are checked. Can we claim that we have addressed 10 of the 12 bugzillas and that the remaining two will be deferred to the v17 implementation?

@RussTreadon-NOAA

> @RussTreadon-NOAA I will merge this PR into the authoritative release/gfsda.v16.3.0 first, and then PR #464. I will ask @MichaelLueken-NOAA to re-tag release/v16.3.0.
>
> One question. There is a list of bugzillas in Issue #137. There are a total of 12 bugzillas, and 10 of them are checked. Can we claim that we have addressed 10 of the 12 bugzillas and that the remaining two will be deferred to the v17 implementation?

Unfortunately the answer is no. Several of the bugzillas refer to the GFS as a whole, not just the DA component. I addressed the DA piece of those bugzillas; this is not sufficient to close them. A few bugzillas are DA-specific. NCO, EIB, and the GFS v16.3.0 DA POC should discuss these to see if the response is sufficient for NCO to close them.

Tomorrow (8/29) I will add a table to the end of #137 separating GFS and DA-only bugzillas.
