
JEDI increment write to cubed sphere history #983

Merged: 13 commits merged into develop from feature/jediinc2fv3 on Mar 28, 2024

Conversation

@DavidNew-NOAA (Collaborator) commented Mar 20, 2024

This PR, a companion to Global Workflow PR #2420, changes the variational YAML for JEDI to write to cubed sphere history rather than to the Gaussian grid. With the new changes to Global Workflow, the new gdas_fv3jedi_jediinc2fv3.x OOPS app will read the JEDI increment from the cubed sphere history, compute the FV3 increment, and interpolate/write it to the Gaussian grid. The only meaningful difference is that the internal calculations, namely the computation of the hydrostatic layer thickness increment, will be done on the native grid before interpolation rather than on the Gaussian grid after it. This makes more sense physically. Eventually the FV3 increment will be written to and read from cubed sphere history anyway.
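For reference, a minimal sketch of the kind of output-block change involved, assuming fv3-jedi's cube sphere history and auxgrid writers; the exact keys, provider, and paths below are illustrative and depend on the fv3-jedi version and the GDASApp YAML templates:

```yaml
# Illustrative sketch only; exact keys and paths depend on the fv3-jedi
# version and the GDASApp YAML templates.

# Before: interpolate the increment and write it to the Gaussian grid
# output:
#   filetype: auxgrid
#   gridtype: gaussian
#   filename: ./anl/atminc.

# After: write the increment directly to cubed sphere history
output:
  filetype: cube sphere history
  provider: ufs
  datapath: ./anl
  filename: atminc.nc
```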

@DavidNew-NOAA (Collaborator, Author)

Need to do a touch more testing before re-opening

@DavidNew-NOAA reopened this Mar 27, 2024
@DavidNew-NOAA (Collaborator, Author)

OK, re-opening. I thought there was an error, but things look good.

@CoryMartin-NOAA (Contributor)

@DavidNew-NOAA before merging, do you have any plots/statistics summarizing your testing that show this produces comparable results?

@DavidNew-NOAA (Collaborator, Author)

@CoryMartin-NOAA wrote:

> @DavidNew-NOAA before merging, do you have any plots/statistics summarizing your testing that show this produces comparable results?

I compared the min, max, and standard deviation of the delp and delz increments produced by the new OOPS app and by the old Python script. The relative errors of the delp and delz increments were 10^-7 and 10^-2, respectively, for each of these three statistics.

@CoryMartin-NOAA merged commit 18ba5da into develop Mar 28, 2024
5 checks passed
@CoryMartin-NOAA deleted the feature/jediinc2fv3 branch March 28, 2024 14:36
danholdaway added a commit that referenced this pull request Apr 8, 2024
* origin/develop:
  Use <filesystem> on a non c++17 supported machine (WCOSS ACORN) (#1026)
  Change generate_com to declare_from_tmpl (#1025)
  Commenting out more of the marine bufr 2 ioda stuff (#1018)
  make driver consistent with workflow driver (#1016)
  Update hashes now that GSI-B is working for EnVar (#1015)
  Add GitHub CLI to path for CI (#1014)
  Use _anl rather than _ges dimensions for increments in FV3 increment converter YAML (#1013)
  Fix inconsistent VIIRS preprocessing test (#1012)
  remove gdas_ prefix from executable filename in test_gdasapp_fv3jedi_fv3inc (#1010)
  Bugfix on Broken GHRSST Ioda Converter (#1004)
  Moved the marine converters to a "safe" place (#1007)
  restore ATM local ensemble ctest functionality (#1003)
  Add BUFR2IODA python API converter to prepoceanobs task (#914)
  Remove sst's from obs proc (#1001)
  JEDI increment write to cubed sphere history (#983)
  [End- to End Test code sprint] Add SEVIRI METEOSAT-8 and METEOSAT-11 to end-to-end testing (#766)
aerorahul pushed a commit to NOAA-EMC/global-workflow that referenced this pull request Apr 23, 2024
This PR, a companion to GDASApp PR
[#983](NOAA-EMC/GDASApp#983), creates a new
Rocoto job called "atmanlfv3inc" that computes the FV3 atmosphere
increment from the JEDI variational increment using a JEDI OOPS app in
GDASApp, called fv3jedi_fv3inc.x, which replaces the GDASApp Python
script, jediinc2fv3.py, for the variational analysis. The "atmanlrun"
job is renamed "atmanlvar" to better reflect its role, now that it runs
one of two JEDI executables for the atmospheric analysis jobs.

Previously, the JEDI variational executable would interpolate and write
its increment to the Gaussian grid during the atmanlrun job, and the
Python script, jediinc2fv3.py, would then read it and write the FV3
increment on the Gaussian grid during the atmanlfinal job. Following the
new changes, the JEDI increment is written directly to the cubed
sphere. Then, during the atmanlfv3inc job, the OOPS app reads it,
computes the FV3 increment directly on the cubed sphere, and writes it
out onto the Gaussian grid.

The reason for writing first to the cubed sphere grid is that otherwise
the OOPS app would have to interpolate twice, once from Gaussian to
cubed sphere before computing the increment and then back to the
Gaussian, since all the underlying computations in JEDI are done on the
native grid.
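
As a rough illustration of that converter step (a hypothetical layout; the actual fv3jedi_fv3inc.x YAML schema may differ), the app's input would point at the cubed-sphere increment written by atmanlvar and its output at the Gaussian grid:

```yaml
# Hypothetical illustration of the converter configuration; the real
# fv3jedi_fv3inc.x YAML schema may differ.
increment:
  input:    # JEDI increment written by the variational job (atmanlvar)
    filetype: cube sphere history
    provider: ufs
    datapath: ./anl
    filename: atminc.nc
  output:   # FV3 increment, interpolated to the Gaussian grid
    filetype: auxgrid
    gridtype: gaussian
    filename: ./anl/atminc_gauss.
```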

The motivation for this new app and job is that eventually we wish to
transition all intermediate data to the native cubed sphere grid, and
the OOPS framework gives us the flexibility to read and write to/from
any grid format we wish by changing only the YAML configuration file
rather than hardcoding. When we do switch to the cubed sphere, it will
be an easy transition. Moreover, the computations in the OOPS app are
done by a compiled executable rather than an interpreted Python script,
providing some performance increase.

It has been tested with a cycling experiment with JEDI on both Hera and
Orion to show that it runs without issues, and I have compared the FV3
increments computed by the original and new codes. The delp and
hydrostatic delz increments, the key increments produced during this
step, differ by relative errors of 10^-7 and 10^-2, respectively. This
difference is most likely due to the original Python script doing its
internal computation on the interpolated Gaussian grid, while the new
OOPS app does its computations on the native cubed sphere before
interpolating to the Gaussian grid.