
List of Exceptions in JEDI-T2O Evaluation #47

Open
ADCollard opened this issue Sep 14, 2022 · 5 comments

Comments

@ADCollard

List exceptions to JEDI-T2O here (GSI QC that will not be implemented in JEDI). Please link to the Issue where each is documented.

@BrettHoover-NOAA

BrettHoover-NOAA commented Sep 14, 2022

  1. (Sat)winds exception for rejections based on the GSI's qcgross check in setupw.f90. This check is intended to be reproduced in JEDI with the SatWindsSPDBCheck function, but that function may need an overhaul to work properly. As currently written, there is no feasible way to make SatWindsSPDBCheck behave exactly the way the GSI qcgross check does, and the entire logic behind qcgross is likely due for reconsideration; a sketch of the general shape of a gross check follows this comment. Documented in AMV/Satwind Obs Validation #40 starting here

UPDATE (4/26/2023): The exception described here has been fixed with updates to the SatWindsSPDBCheck; this can be disregarded.
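
For context, below is a minimal standalone sketch of the generic shape of a gross check: the vector innovation (O-B) normalized by the assigned ob-error and compared against a threshold. All values, including the cgross threshold, are illustrative assumptions; the satwinds-specific threshold adjustments in setupw.f90 that make qcgross hard to reproduce are not captured here.

    program gross_check_sketch
      implicit none
      integer, parameter :: r_kind = selected_real_kind(15)
      real(r_kind) :: u_obs, v_obs, u_ges, v_ges, obs_error, cgross, ratio
      ! Illustrative values only
      u_obs = 12.0_r_kind ; v_obs = -3.0_r_kind   ! observed wind components (m/s)
      u_ges =  7.5_r_kind ; v_ges = -1.0_r_kind   ! background (first-guess) wind (m/s)
      obs_error = 1.8_r_kind                      ! assigned observation error (m/s)
      cgross    = 2.5_r_kind                      ! gross-check threshold (assumed)
      ! Vector innovation normalized by the ob-error, compared against cgross
      ratio = sqrt((u_obs-u_ges)**2 + (v_obs-v_ges)**2) / obs_error
      if (ratio > cgross) then
         print *, 'rejected by gross check: ratio =', ratio
      else
         print *, 'passed gross check: ratio =', ratio
      end if
    end program gross_check_sketch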

@BrettHoover-NOAA

BrettHoover-NOAA commented Oct 17, 2022

  1. (Sat)winds exception for rejections based on GSI's experr_norm check in read_satwnd.f90, L1192–1198:
                 experr_norm = 10.0_r_kind - 0.1_r_kind * ee   ! introduced by Santek/Nebuda 
                 if (obsdat(4) > 0.1_r_kind) then  ! obsdat(4) is the AMV speed
                    experr_norm = experr_norm/obsdat(4)
                 else
                    experr_norm = 100.0_r_kind
                 end if
                 if (experr_norm > 0.9_r_kind) qm=15 ! reject data with EE/SPD>0.9

This test is carried out for NESDIS winds and relies on computing a norm as the ratio of the expected error to the wind speed, rejecting winds with a norm value > 0.9. This needs to be introduced as an obsfunction, since we do not have the math tools required to carry out this computation in the YAML file; a standalone sketch of the check appears below. Documented in #40 starting here.
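
For clarity, here is the same logic as a minimal standalone program, translated directly from the excerpt above; the input values (ee, spd, and the starting quality mark qm) are illustrative assumptions:

    program experr_norm_sketch
      implicit none
      integer, parameter :: r_kind = selected_real_kind(15)
      real(r_kind) :: ee, spd, experr_norm
      integer :: qm
      ee  = 4.0_r_kind    ! expected error (EE) reported with the AMV (illustrative)
      spd = 5.0_r_kind    ! AMV wind speed, obsdat(4) in read_satwnd.f90 (illustrative)
      qm  = 2             ! quality mark before the check (illustrative)
      experr_norm = 10.0_r_kind - 0.1_r_kind*ee   ! Santek/Nebuda scaling of EE
      if (spd > 0.1_r_kind) then
         experr_norm = experr_norm/spd            ! normalize by the wind speed
      else
         experr_norm = 100.0_r_kind               ! near-calm winds always fail
      end if
      if (experr_norm > 0.9_r_kind) qm = 15       ! reject data with EE/SPD > 0.9
      print *, 'experr_norm =', experr_norm, ' qm =', qm
    end program experr_norm_sketch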

UPDATE (4/26/2023): This exception has been fixed with the introduction of SatWindsErrnormCheck. The new filter allows UFO acceptance to match GSI with the exception of 8 satwinds (16 u and v ob-variables per the UFO's accounting), all of which have an error-norm value within +/- 1.0E-07 of the 0.9 threshold. These remaining differences are therefore likely caused by float precision/handling differences between the two code-bases rather than by a difference in QC. Outside of these exceptions, this can be disregarded.

@BrettHoover-NOAA

  1. There is an irreconcilable difference in UFO acceptance of scatterometer winds (ScatWinds, see Scatterometer Winds (ScatWinds) Validation #54), where the underlying disagreement is in how the surface-type category is defined at the interpolated observation location. In the test data, this results in 2 ScatWinds being accepted in GSI and rejected in UFO. For both of these ScatWinds, GSI assigns an ocean-surface type (1) while UFO assigns a snow-surface type (3). Each system correctly determines acceptance or rejection from the surface type it assigns, but the two come to different conclusions because they define the surface-type category differently.

A plot of the 2 disagreeing observations (in red) compared with all other accepted ScatWinds (in blue) shows that the disagreements occur near the land/sea boundary on the northern side of land masses in the Arctic. This exception therefore probably comes down to the observation lying very close to a snow/ocean interface, with some difference in whether the point is classified as snow- or ocean-surface (a toy illustration follows the figure):
[Figure: map of the 2 disagreeing ScatWinds (red) against all other accepted ScatWinds (blue), clustered along the northern coasts of Arctic land masses]
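
As a toy illustration of how this can happen, the sketch below assumes one system takes the surface type of the nearest surrounding grid point while the other takes the type holding the majority of the interpolation weight. Neither definition is taken from the GSI or UFO source; the point is only that two defensible definitions can diverge for an ob sitting right at a snow/ocean interface.

    program sfc_type_sketch
      implicit none
      integer, parameter :: OCEAN = 1, SNOW = 3
      integer :: corners(4)        ! surface types at the 4 surrounding grid points
      real    :: w(4)              ! interpolation weights at the ob location
      integer :: type_nearest, type_majority
      ! Ob just off a snow-covered coast: 3 ocean corners, 1 snow corner,
      ! with the snow corner closest to the ob (illustrative values only)
      corners = (/ OCEAN, OCEAN, OCEAN, SNOW /)
      w       = (/ 0.15, 0.20, 0.25, 0.40 /)
      type_nearest  = corners(maxloc(w, dim=1))   ! nearest-point definition -> SNOW (3)
      ! weight-majority definition -> OCEAN (1), since snow holds only 0.40 of the weight
      type_majority = merge(SNOW, OCEAN, sum(w, mask=(corners == SNOW)) > 0.5)
      print *, 'nearest-point surface type  :', type_nearest
      print *, 'weight-majority surface type:', type_majority
    end program sfc_type_sketch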

@BrettHoover-NOAA

  1. Final ob-error assignments can diverge between UFO and GSI because of how each system handles the observation time-window and how each searches for duplicate observations to down-weight.

The GSI's time-window is more inclusive on the front end than the UFO's, so for the same 6-hour time-window the GSI will ingest more observations than the UFO. Both systems down-weight observations by assigning higher ob-errors when multiple observations fall in the same lat/lon/pressure space separated by less than an hour in time ("duplicate observations"). In GSI this down-weighting is handled in setupw.f90 and operates on the entire set of wind observations that makes it through read_*.f90 initial quality control, while in UFO it is handled at the YAML level with ObsErrorFactorDuplicateCheck and only searches for duplicates among the obs confined to the YAML's scope.

As a result, GSI can find more duplicate observations than UFO, either by ingesting more observations at the front end of the time-window or by searching across all observations being processed rather than only those within a particular YAML's scope. This can produce significant differences in the final assigned ob-error, with UFO assigning lower ob-errors. I have seen satwinds cases where the UFO's final assigned ob-error is 30–40% less than the GSI's; these cases are brought into alignment with GSI when the UFO's time-window is artificially extended to ingest all of the obs on the front end that GSI usually ingests and UFO usually doesn't. This is currently an irreconcilable difference in approach between UFO and GSI and will likely show up outside of satwinds wherever the duplicate-ob down-weighting is used; a sketch of the down-weighting mechanism is below.
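
A minimal sketch of the down-weighting mechanism, assuming duplicates are counted by exact lat/lon/pressure match within an hour and that the ob-error is inflated by the square root of the duplicate count; GSI's setupw.f90 implements its own tolerances and weighting, so both choices are assumptions for illustration. The point is that a larger ingested set finds more duplicates and therefore assigns larger final ob-errors.

    program dup_downweight_sketch
      implicit none
      integer, parameter :: r_kind = selected_real_kind(15)
      integer, parameter :: nobs = 4
      real(r_kind) :: lat(nobs), lon(nobs), pres(nobs), time(nobs)
      real(r_kind) :: err(nobs), dup(nobs)
      integer :: i, j
      ! Four obs; the first two are co-located and within an hour of each other
      lat  = (/  30.0_r_kind,  30.0_r_kind,  45.0_r_kind,  60.0_r_kind /)
      lon  = (/ 100.0_r_kind, 100.0_r_kind, 110.0_r_kind, 120.0_r_kind /)
      pres = (/ 500.0_r_kind, 500.0_r_kind, 700.0_r_kind, 850.0_r_kind /)
      time = (/  -0.5_r_kind,   0.2_r_kind,   0.0_r_kind,   0.0_r_kind /) ! hours
      err  = 3.0_r_kind   ! initial ob-error (m/s), identical for all obs here
      dup  = 1.0_r_kind
      ! Count duplicates: same lat/lon/pressure, separated by < 1 hour
      ! (exact equality is used only in this toy example; real code uses tolerances)
      do i = 1, nobs
         do j = 1, nobs
            if (i /= j .and. lat(i) == lat(j) .and. lon(i) == lon(j) .and. &
                pres(i) == pres(j) .and. abs(time(i)-time(j)) < 1.0_r_kind) then
               dup(i) = dup(i) + 1.0_r_kind
            end if
         end do
      end do
      ! Down-weight duplicates by inflating the ob-error; dropping an ob from the
      ! ingested set (e.g. a narrower time-window) lowers dup and the final error
      err = err * sqrt(dup)
      print *, 'dup counts  :', dup
      print *, 'final errors:', err
    end program dup_downweight_sketch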

@azadeh-gh

  1. MHS exception for error inflation based on the GSI's MHS scattering-index check in qcmod.f90. The Bennartz scattering QC check is used in the JEDI YAML file instead of the MHS scattering-index error inflation, so fewer obs pass QC in JEDI than in GSI and the final obs errors are higher in GSI than in JEDI; a hedged sketch of a Bennartz-style index is below.
    The QC flowchart and validation plots can be found in MHS Obs Validation #45
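
For orientation, below is a hedged sketch of a Bennartz-style scattering index: an 89 minus 157 GHz brightness-temperature difference corrected for the viewing-angle dependence of the clear-sky difference and compared against a rejection threshold. The coefficients and the threshold are placeholders, not the operational values used in qcmod.f90 or the JEDI YAML.

    program bennartz_si_sketch
      implicit none
      integer, parameter :: r_kind = selected_real_kind(15)
      real(r_kind) :: tb89, tb157, zenith, si, threshold
      tb89   = 265.0_r_kind   ! MHS channel 1 (89 GHz) brightness temperature, K (illustrative)
      tb157  = 252.0_r_kind   ! MHS channel 2 (157 GHz) brightness temperature, K (illustrative)
      zenith = 30.0_r_kind    ! sensor zenith angle, degrees (illustrative)
      ! Bennartz-style index: channel difference minus an angle-dependent
      ! clear-sky offset; the two coefficients below are placeholders
      si = (tb89 - tb157) - ( -1.0_r_kind + 0.1_r_kind*zenith )
      threshold = 5.0_r_kind  ! assumed rejection/inflation threshold
      if (si > threshold) then
         print *, 'flagged as scattering-affected: SI =', si
      else
         print *, 'passed scattering check: SI =', si
      end if
    end program bennartz_si_sketch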
