
AMV/Satwind Obs Validation #40

Closed
4 tasks done
CoryMartin-NOAA opened this issue Aug 18, 2022 · 23 comments
Labels: GDAS, HofX, QC, UFO (Issues associated with UFO acceptance testing)

@CoryMartin-NOAA

CoryMartin-NOAA commented Aug 18, 2022

Need to validate satellite-based wind/AMV observations from all platforms.

This issue will track progress on validation of AMVs in UFO for GDAS in four stages:

  • HofX computation using GSI produced GeoVaLs
  • QC count checks (and YAML changes) using GSI produced GeoVaLs
  • Obs errors using GSI produced GeoVaLs
  • HofX computation in FV3-JEDI

Note that this will need to include any near-surface correction applied to the observation/hofx value.

@CoryMartin-NOAA CoryMartin-NOAA added UFO Issues associated with UFO acceptance testing GDAS HofX QC labels Aug 18, 2022
@BrettHoover-NOAA

Running Cory's GDASApp (importing the GSI grid into JEDI to remove interpolative differences in hofx) and EVA scripts on a YAML file designed for satwind QC, including all of the criteria in the ewok/jedi-gdas satwind YAML file located here. The test completes with some disagreement between JEDI and GSI on which observations are accepted and which are rejected. These disagreements show up in the assigned errors where GSI rejects an observation (assigning an impossibly large error) and JEDI does not:

[Figure: errordiff_vs_pressure_2021080100_satwind_northward_wind]
This plot shows disagreements throughout the troposphere on when impossibly large errors should be assigned (GSI rejected), including some cases around 800 hPa where JEDI assigns a much larger error than GSI does.

Looking more closely at the u-component (eastward_wind), I found that when GSI passes an observation (GsiEffectiveQC=0) and JEDI does not, JEDI's rejection flag is either 12 or 17. These correspond to a Bounds Check (12) or a diffref (17) check, where MetaData is too far from a reference value. In my test data, I have the following breakdown of agreement or disagreement between JEDI's and GSI's rejection criteria on satwinds:

[Screenshot: breakdown of JEDI/GSI satwind rejection agreement]

@BrettHoover-NOAA

BrettHoover-NOAA commented Aug 23, 2022

Focusing on the JEDI-reject/GSI-accept observations, nearly all of them (95%) come from a flag=12 (Bounds Check) failure in JEDI. I decided to crawl through these criteria to find the observations where the disagreements are happening.

There are numerous Bounds Check filters in the yaml file, some of which are very difficult to simulate from the diag file. Of the ones I could easily simulate, I found that the most common disagreements that are being captured with my tests are coming from one particular filter:

[Screenshot: the Bounds Check filter responsible for most of the disagreements]

This is a check on GOES IR (245), EUMET IR (253), and JMA IR (252) satwinds to reject when pressure is outside of a 400 to 800 hPa range, based on the wording in the yaml filter:

# GOES IR (245), EUMET IR (253), JMA IR (252) reject when pressure between 400 and 800 mb.
  - filter: Bounds Check
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    test variables:
    - name: MetaData/air_pressure
    minvalue: 40000
    maxvalue: 80000
    where:
    - variable:
        name: ObsType/eastward_wind
      is_in: 245, 252, 253
    action:
      name: reject

However, the wording of the note on this filter points in the opposite direction, implying that the obs should be rejected if they are within the 400–800 hPa range. Checking this filter's settings against GSI code, I see no references to a 400 hPa cutoff but I do see references to an 800 hPa cutoff for Type 245 winds on Lines 1403-1404 of read_satwnd.f90, but the cutoff goes in the opposite direction, rejecting observations at pressures above 800 hPa rather than below it:

              if( ppb >= 800.0_r_kind .and. ree >0.55_r_kind) then
                 qm=15

I'm wondering if this filter is set up incorrectly, or is the comment attached to it implying something it shouldn't? It seems like the maxvalue=80000 should correctly exclude Type=245 observations with pressures greater than 800 hPa, which matches what I see in the GSI. But the fact that so many of the rejection disagreements are showing up in this filter makes me think that something is possibly reversed here.

@CoryMartin-NOAA

@BrettHoover-NOAA I think I agree with you (and it's at least worth trying): that YAML suggests that if an ob is between 400–800 hPa and is Type 245/252/253, it should be rejected. I think there are other 'actions' to try, but I don't know them off the top of my head. Perhaps 'accept' instead of 'reject'?

@BrettHoover-NOAA

I found the range operation in GSI, in setupw.f90 Lines 963–967:

        if(itype ==245 ) then
           if( presw >399.0_r_kind .and. presw <801.0_r_kind) then  !GOES IR  winds
              error=zero                          !  no data between 400-800mb
           endif
        endif

So it looks like this filter is definitely operating in reverse, excluding what should be retained and retaining what should be excluded. Also, the GSI treats Type 252 winds a little differently, excluding 500–800 hPa, while 400–800 hPa is used again for Type 253 winds (setupw.f90 Lines 968–975):

        if(itype == 252 .and. presw >499.0_r_kind .and. presw <801.0_r_kind) then  ! JMA IR winds
           error=zero
        endif
        if(itype == 253 )  then
           if(presw >401.0_r_kind .and. presw <801.0_r_kind) then  ! EUMET IR winds
              error=zero
           endif
        endif
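To make the reversal concrete, here's a minimal numpy sketch (my own illustration, not UFO code) contrasting the two semantics: a Bounds Check with minvalue/maxvalue flags obs whose test value falls outside the bounds, while the GSI code above zeroes the error (rejects) for Type 245 winds inside the band:

```python
import numpy as np

def bounds_check_reject(p_hpa, minvalue, maxvalue):
    # UFO Bounds Check semantics: flag obs whose test variable
    # falls OUTSIDE [minvalue, maxvalue]
    return (p_hpa < minvalue) | (p_hpa > maxvalue)

def gsi_type245_reject(p_hpa):
    # setupw.f90 behavior for Type 245: error=zero (reject)
    # when 399 < presw < 801, i.e. INSIDE the band
    return (p_hpa > 399.0) & (p_hpa < 801.0)

p = np.array([300.0, 500.0, 700.0, 900.0])  # ob pressures in hPa
print(bounds_check_reject(p, 400.0, 800.0))  # rejects 300 and 900 hPa
print(gsi_type245_reject(p))                 # rejects 500 and 700 hPa
```

The two masks are near-complements of each other, which is why so many of the rejection disagreements pile up in this one filter.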

@BrettHoover-NOAA

I split this 400–800 pressure filter into 3 filters, applying the exact threshold values to each of the three observation types (245, 252, 253). At @CoryMartin-NOAA's suggestion I replaced the Bounds Check filter with the Perform Action filter, which will execute the reject action on any observation within the bounds, rather than those outside of it:

old filter:

  # GOES IR (245), EUMET IR (253), JMA IR (252) reject when pressure between 400 and 800 mb.
  - filter: Bounds Check
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    test variables:
    - name: MetaData/air_pressure
    minvalue: 40000
    maxvalue: 80000
    where:
    - variable:
        name: ObsType/eastward_wind
      is_in: 245, 252, 253
    action:
      name: reject

new filters:

  # GOES IR (245) reject when pressure between 399 and 801 mb.
  - filter: Perform Action
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    where:
    - variable: MetaData/air_pressure
      minvalue: 39900
      maxvalue: 80100
    - variable: ObsType/eastward_wind
      is_in: 245
    action:
      name: reject
  # JMA IR (252) reject when pressure between 499 and 801 mb.
  - filter: Perform Action
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    where:
    - variable: MetaData/air_pressure
      minvalue: 49900
      maxvalue: 80100
    - variable: ObsType/eastward_wind
      is_in: 252
    action:
      name: reject
  # EUMETSAT IR (253) reject when pressure between 401 and 801 mb.
  - filter: Perform Action
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    where:
    - variable: MetaData/air_pressure
      minvalue: 40100
      maxvalue: 80100
    - variable: ObsType/eastward_wind
      is_in: 253
    action:
      name: reject

It looks like this works, bringing GSI and JEDI into much greater agreement for these satwind types.

@BrettHoover-NOAA

I am working on code to drill down into the different rejection criteria to find the source of the remaining disagreements. I have some python code that pulls variables from the diag files, plus some very limited geovals data that I interpolate to the observations with MetPy.interpolate.interpolate_to_points(). I am trying to identify the observations that should be rejected by each filter, and then run those observations through a function that produces a contingency table of the GSI/JEDI QC outcomes:

Obs that pass both GSI and JEDI QC
Obs that fail GSI QC and pass JEDI QC
Obs that pass GSI QC and fail JEDI QC
Obs that fail both GSI and JEDI QC

A successful test should put all of the obs into only the top-most or bottom-most contingency field, and any observations that fall into the middle two categories are suspicious disagreements between GSI and JEDI. Results are very nearly (but not entirely) equal for both u-component and v-component winds, so I will focus on the u-component for now.
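The contingency-table function itself is simple; a sketch of what mine does (hypothetical variable names, boolean pass/fail masks as inputs):

```python
import numpy as np

def qc_contingency(gsi_pass, jedi_pass):
    # Four-way breakdown of GSI vs JEDI QC outcomes;
    # inputs are boolean arrays with True = passed QC
    gsi_pass = np.asarray(gsi_pass)
    jedi_pass = np.asarray(jedi_pass)
    counts = {
        "pass both GSI and JEDI QC": int(np.sum(gsi_pass & jedi_pass)),
        "fail GSI and pass JEDI QC": int(np.sum(~gsi_pass & jedi_pass)),
        "pass GSI and fail JEDI QC": int(np.sum(gsi_pass & ~jedi_pass)),
        "fail both GSI and JEDI QC": int(np.sum(~gsi_pass & ~jedi_pass)),
    }
    total = gsi_pass.size
    pcts = {k: 100.0 * v / total for k, v in counts.items()}
    return counts, pcts
```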

If we perform a contingency-table check on all satwinds, regardless of whether they fit a rejection criterion or not, there is broad disagreement between GSI and JEDI, with only about 56% of satwinds in agreement and the remainder in disagreement:

all rejection criteria
    Total Obs: 1229931
     Flagged obs pass both GSI and JEDI QC 21.51% (264550)
     Flagged obs fail GSI and pass JEDI QC 23.42% (288013)
     Flagged obs pass GSI and fail JEDI QC 20.19% (248363)
     Flagged obs fail both GSI and JEDI QC 34.88% (429005)

If I focus on only those winds that should be caught by a JEDI filter, there is very broad agreement between GSI and JEDI QC. For example, all of the pressure-level filters are in perfect agreement, e.g.:

reject Type (245) inside of 399-801 hPa
    Total Obs: 45735
     Flagged obs pass both GSI and JEDI QC 0.00% (0)
     Flagged obs fail GSI and pass JEDI QC 0.00% (0)
     Flagged obs pass GSI and fail JEDI QC 0.00% (0)
     Flagged obs fail both GSI and JEDI QC 100.00% (45735)

The log-normal vector departure (LNVD) check on a subset of satwind types is also clean:

reject Type (244,247,257,258,259,260) LNVD>3.0
    Total Obs: 31214
     Flagged obs pass both GSI and JEDI QC 0.00% (0)
     Flagged obs fail GSI and pass JEDI QC 0.00% (0)
     Flagged obs pass GSI and fail JEDI QC 0.00% (0)
     Flagged obs fail both GSI and JEDI QC 100.00% (31214)

Some filters catch no observations at all and are effectively not applied; these include the gross checks of 130 m/s maxima for both wind components and for the wind speed, and a check constraining a subset of satwind types to pressures less than 600 hPa.

There are two filters that show disagreements. First, the filter that rejects satwinds with more than a 50 degree difference with the model background, excluding observations with a u-component of less than 3.5 m/s:

reject direction diff > 50 & u>3.5
    Total Obs: 17276
     Flagged obs pass both GSI and JEDI QC 2.43% (419)
     Flagged obs fail GSI and pass JEDI QC 1.53% (264)
     Flagged obs pass GSI and fail JEDI QC 4.65% (803)
     Flagged obs fail both GSI and JEDI QC 91.40% (15790)

Second, the filter that rejects any satwind whose pressure level is within 100 hPa of the surface, based on interpolation of the surface_pressure geoval:

reject surface_pressure - ob_pressure < 100 hPa
    Total Obs: 122579
     Flagged obs pass both GSI and JEDI QC 0.00% (0)
     Flagged obs fail GSI and pass JEDI QC 0.00% (0)
     Flagged obs pass GSI and fail JEDI QC 38.22% (46850)
     Flagged obs fail both GSI and JEDI QC 61.78% (75729)

I am performing my own interpolation of the surface_pressure geoval to observation lat/lon using MetPy.interpolate.interpolate_to_points() here, since the interpolated value used in JEDI is not available in the diag file.
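The interpolation plus the 100 hPa surface-proximity test amounts to something like the following sketch (a self-contained bilinear stand-in for MetPy.interpolate.interpolate_to_points(), with hypothetical variable names):

```python
import numpy as np

def bilinear_to_points(lons, lats, field, ob_lon, ob_lat):
    # Bilinear interpolation from a regular lon/lat grid to ob locations;
    # lons (nx,) and lats (ny,) must be ascending, field has shape (ny, nx)
    i = np.clip(np.searchsorted(lons, ob_lon) - 1, 0, lons.size - 2)
    j = np.clip(np.searchsorted(lats, ob_lat) - 1, 0, lats.size - 2)
    tx = (ob_lon - lons[i]) / (lons[i + 1] - lons[i])
    ty = (ob_lat - lats[j]) / (lats[j + 1] - lats[j])
    f00, f10 = field[j, i], field[j, i + 1]
    f01, f11 = field[j + 1, i], field[j + 1, i + 1]
    return (1 - ty) * ((1 - tx) * f00 + tx * f10) + ty * ((1 - tx) * f01 + tx * f11)

def near_surface_reject(psfc_pa, ob_p_pa, threshold_pa=10000.0):
    # Reject satwinds whose pressure level is within 100 hPa (10000 Pa)
    # of the (interpolated) surface pressure
    return (np.asarray(psfc_pa) - np.asarray(ob_p_pa)) < threshold_pa
```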

@BrettHoover-NOAA

These two filters only account for 264 obs that fail GSI QC and pass JEDI QC, and 47,643 obs that pass GSI QC and fail JEDI QC. Based on the contingency table for all satwinds, this is only 0.092% and 19.2% of the total obs in these contingency categories, respectively.

There is another filter in the JEDI satwinds.yaml file that I have not checked, which rejects any observations 50 hPa above the (estimated) tropopause:

  # Multiple satellite platforms, reject when pressure is more than 50 mb above tropopause.
  - filter: Difference Check
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    reference: TropopauseEstimate@ObsFunction
    value: MetaData/air_pressure
    minvalue: -5000                    # 50 hPa above tropopause level, negative p-diff
    action:
      name: reject

This is described as applying to "multiple satellite platforms", but no satwind Type constraint is applied here, so it is applied to all satwinds. The TropopauseEstimate obsfunction is a crude estimate, and I don't have the details needed to replicate it in my code, so I can't directly test this filter. However, looking at the pressure distribution of observations that either pass GSI and fail JEDI, or fail GSI and pass JEDI, there appears to be some pressure dependence:
[Figure: contingency_table_fig01 — pressure distribution of GSI/JEDI QC disagreements]
There is an almost perfectly sharp cutoff in GSI-pass/JEDI-fail satwinds that keeps them at pressures either greater than 800 hPa or less than 400 hPa. And there is a large proportion of total satwinds at pressures less than 200 hPa that fall into GSI-fail/JEDI-pass.

@BrettHoover-NOAA

Update from weekly check-in: we need to reconfigure a few things to make sure that the geovals file contains the tropopause pressure, and to make sure that both the geovals and diag files contain the same obs (there are minor differences in inclusivity vs. exclusivity of the time-window boundaries). No interpolation should be necessary to go from geovals variables to diag variables; they are both in obs-space. Will update.

@BrettHoover-NOAA

BrettHoover-NOAA commented Sep 2, 2022

I am digging deeper into the GSI QC criteria and I'm running into something I can't readily explain with the GSI 'final' assigned errors. These are the errors assigned to satwinds in setupw.f90 after all QC and, according to setupw.f90 L 1375-1385, should be either a reasonable error value for assimilated obs or the value of huge_single for rejected obs (code block). The use-flag muse for the observation is applied on L 1206 based on whether the inverse-error is above a small threshold value (i.e., the error is beneath a large threshold value) (code block). So all observations with muse=-1 should be assigned an error of huge_single.

But when I look at my diag file, I can find numerous observations with a GsiEffectiveQC value of 1 (reject) that retain a final error value implying the observation was assimilated.

I plotted two subsets of GsiFinalObsError values: those with a GsiEffectiveQC value of zero (assimilated), and those with a GsiEffectiveQC value that is nonzero (rejected). I plotted a histogram across the full range of values on the left, and only for observations with errors between 0–50 m/s on the right. I see a lot of observations in the rejected subset that retain a reasonable error value, which implies muse=1 and the observation was assimilated.

Final ob error of accepted u-winds (GsiEffectiveQC==0):
[Figure: GsiFinalObsError_accepted_u_winds]

Final ob error of rejected u-winds (GsiEffectiveQC!=0):
[Figure: GsiFinalObsError_rejected_u_winds]

These rejected-but-reasonable-error observations constitute about 28% of the rejected observations in my diag file, or about 350,000 winds. I'm concerned that I am either interpreting GsiEffectiveQC incorrectly, or that there is a problem with how it is written to the diag file and GsiEffectiveQC does not represent accepted vs. rejected observations.
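The consistency check I'm describing amounts to the following (a sketch with hypothetical variable names; huge_single taken as 1e8 per the GSI source):

```python
import numpy as np

HUGE_SINGLE = 1.0e8  # GSI sentinel error value for rejected obs

def inconsistent_rejects(effective_qc, final_err):
    # Count obs flagged as rejected (GsiEffectiveQC != 0) that still
    # carry a plausible, assimilation-sized final error
    qc = np.asarray(effective_qc)
    err = np.asarray(final_err)
    rejected = qc != 0
    plausible = (err > 0.0) & (err < HUGE_SINGLE)
    return int(np.sum(rejected & plausible))
```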

@CoryMartin-NOAA, do you know of an explanation for what I'm seeing here? I don't want to try to drill down any further into these acceptance criteria until I'm sure I'm interpreting the GSI QC flag correctly.

@CoryMartin-NOAA

@BrettHoover-NOAA I think GsiEffectiveQC currently comes from only two things:

                    gsiqc[outdata[errname] == 1e8] = 1
                    gsiqc[outdata[(outvars[o], "GsiUseFlag")] < 0] = 1

See: https://github.com/JCSDA-internal/ioda-converters/blob/c0c89d103338e8fec5a96d598cb9075c7061888a/src/gsi-ncdiag/gsi_ncdiag.py#L834

Perhaps GSI ignores the error if the use flag is -1? I'm not sure about that.

@CoryMartin-NOAA

@BrettHoover-NOAA for your tropopause related QC, can you try for the 2021080100 cycle but change the shell script to point to:
/work2/noaa/da/cmartin/UFO_eval/data/gsi_geovals_l127/nofgat_aug2021/20220906/obs (basically, change 20220816 to 20220906) and let me know if it works. I only copied one cycle here for now, but will be running the full week tonight/tomorrow. I think you will probably need to modify your YAMLs to use tropopause_pressure@GeoVaLs instead of whatever function is there currently. This should be the value of P(tropopause) as computed by GSI.

@BrettHoover-NOAA

BrettHoover-NOAA commented Sep 6, 2022

I appreciate the info and the updated tropopause pressure data, @CoryMartin-NOAA! I'll see if I can generate data for the test cycle.

Based on the code snippet you posted, do you know what the distinction between an ob with GsiFinalObsError=1e8 versus GsiFinalObsError=_FillValue=-3.3687953e+38 would be? It looks like the observation error is set to _FillValue if obserror=Inf (Lines 1564, 1762), but I don't know why this would ever be the case if the GSI defaults to an error value of huge_single if it can't assign an error. I have about 73,000 winds (~5.9% of the total) with a _FillValue for the final observation error. About 24% of the winds have the huge_single value of 1e8, and the rest have values assigned in a reasonable range between about 5–25 m/s. The GsiFinalObsError=_FillValue observations appear to be exclusively Type=240 satwinds, which according to this table are GOES shortwave-IR observations, and are not assimilated.
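For my own bookkeeping, I'm bucketing the final-error values like this (a sketch; the fill threshold is my own heuristic for spotting the large negative _FillValue):

```python
import numpy as np

HUGE_SINGLE = 1.0e8        # GSI's huge_single, written out for rejected obs
FILL_THRESHOLD = -1.0e30   # anything below this is treated as the IODA _FillValue

def classify_final_error(err):
    # Bucket GsiFinalObsError values into fill_value / huge_single / reasonable
    err = np.asarray(err, dtype=np.float64)
    out = np.full(err.shape, "reasonable", dtype=object)
    out[np.isclose(err, HUGE_SINGLE)] = "huge_single"
    out[err < FILL_THRESHOLD] = "fill_value"
    return out
```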

@CoryMartin-NOAA
Copy link
Contributor Author

I think the difference is that _FillValue is the JEDI/IODA fill value, while huge_single is the value GSI writes out.
This may mean that the ioda-converters python script that converts GSI ncdiag to IODA is incomplete or incorrect in what it assumes is good/bad.

@BrettHoover-NOAA

I was able to get the new tropopause pressure data in the geovals, replacing both the /obs and /geovals directories with their 20220906 counterparts – results look good, thanks!

@CoryMartin-NOAA

Thanks @BrettHoover-NOAA. Once the full week is done, I'll populate new files in 20220906 for each cycle.

@BrettHoover-NOAA

I have not incorporated the updated land_type_index_NPOESS geovals variable into the filters yet, but looking at the geovals file, the data look good. I'll try constructing a filter for the ob-types that use the SatWindsLNVDCheck function, since in the GSI the LNVD check is paired with a check for near-surface wind rejections over non-water surfaces, with varying pressure thresholds depending on the satwind type.

I have a GSI QC 'simulator' written in python that attempts to re-create the GSI's QC rejections using only the data available in the diag and geovals files. I've tested it on our 2021080100 test data, and all 1376189 satwinds have the same accept/reject outcome between the GSI and the simulator. From here, I can attach individual flag numbers to each rejection criterion in the simulator and further break down the GSI's rejections into their component parts. The simulator is saved in a github repo here.

The first thing that jumps out when decomposing the GSI simulator's rejections is that the qcgross check, partially formalized in JEDI's SatWindsSPDBCheck (though in the GSI this check applies to all winds passing through setupw.f90, not just satwinds), accounts for a substantial portion of the rejections. In the test data, there are 800746 satwinds rejected by GSI, of which the simulator attributes 41727 to the qcgross check, or about 5.2% of the total rejections. These rejections appear to be biased toward satwinds slower than the model background, owing to a tighter QC threshold for qcgross rejection on winds with a negative speed bias (the source of the SPDB portion of the name SatWindsSPDBCheck):
[Figure: GSI_simulated_QC_spdb_qcgross — speed-bias distribution of simulated qcgross rejections]
Observations with a small speed bias, particularly a small positive bias, are underrepresented in these rejections because of the QC threshold being tighter for winds with a negative speed bias.

Current implementation of the SatWindsSPDBCheck applies a single value of error_min, error_max, and maxvalue representing the QC threshold for the qcgross test:

  - filter: Bounds Check
    filter variables:
    - name: eastward_wind
    - name: northward_wind
    test variables:
    - name: SatWindsSPDBCheck@ObsFunction
      options:
        error_min: 1.4
        error_max: 20.0
    maxvalue: 1.75
    action:
      name: reject
    defer to post: true

For perfect GSI/UFO parity, this would have to be changed to use a different error_min, error_max, and maxvalue for each satwind type, following the values in the GSI global_convinfo.txt fix file. Further, the GSI's tightening of maxvalue (qcgross in the GSI) for some observation types within selected pressure ranges when spdb<0 would need to be replicated, which doesn't seem feasible with the code as currently written. So this part of QC rejection will likely be documented as a known asymmetry between GSI and JEDI, to be addressed at some point in the future.
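For the record, my simulator's version of this test looks roughly like the following (a simplified sketch of my reading of the setupw.f90 logic, not the UFO obsfunction; the 0.7 tightening factor for negative speed bias is illustrative):

```python
import numpy as np

def spdb_qcgross_reject(u, v, u_ges, v_ges, obserr,
                        error_min=1.4, error_max=20.0,
                        qcgross=1.75, spdb_tighten=0.7):
    # Normalize the vector departure by the clamped ob error and compare
    # against the gross-check threshold; the threshold is tightened when
    # the ob is slower than the background (negative speed bias)
    u, v, u_ges, v_ges = map(np.asarray, (u, v, u_ges, v_ges))
    spdb = np.hypot(u, v) - np.hypot(u_ges, v_ges)
    err = np.clip(np.asarray(obserr, dtype=float), error_min, error_max)
    val = np.hypot(u - u_ges, v - v_ges) / err
    thresh = np.where(spdb < 0.0, spdb_tighten * qcgross, qcgross)
    return val > thresh
```

With the same 3 m/s vector departure, a slow-biased ob is rejected while a fast-biased ob survives, which matches the asymmetry in the speed-bias histogram above.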

I can address this by using the GSI simulator to find the observations rejected by qcgross criteria in the diag files and exclude them from the GSI/UFO parity tests. The qcgross/SatWindsSPDBCheck issue will be catalogued with known JEDI-T2O exceptions here #47.

@BrettHoover-NOAA

@CoryMartin-NOAA: I am working backward from setupw.f90 to audit the GSI acceptance criteria in read_satwnd.f90. The decisions made in read_satwnd.f90 are passed to setupw.f90 through the quality mark variable, which is currently available in the diag files as PreQC. Satwinds with flag values of 9, 12, or 15 inherited from read_satwnd.f90 are rejected in setupw.f90.

There are numerous criteria for setting these PreQC flags in read_satwnd.f90. Should I approach this by assuming that I will have these flag-values and can set up filters to reject PreQC values of 9, 12, or 15? Or should I approach this by assuming that these flag-values will not exist and I need to construct JEDI filters to re-create the acceptance criteria in read_satwnd.f90 that establishes these flags?

@CoryMartin-NOAA

I believe my answer, @BrettHoover-NOAA, is that it depends on the timeline. Short term, you can assume you will have these flags set; long term, we will want to construct the filters. To be honest, I lean toward not duplicating effort, so I say we might as well recreate these in YAML as soon as possible. The PreQC checks themselves should be largely trivial to implement, I think.

@BrettHoover-NOAA

The satwind acceptance differences between GSI and UFO are fully accounted for in the 2021080100 test data. This means that all winds are either in agreement between GSI and UFO acceptance criteria, or the differences are traceable to a known cause.

The breakdown of acceptance (disagreement) on the test data is that ~75% of observations are in agreement (both GSI and UFO pass, or both GSI and UFO fail), while the disagreements break into 4.72% of observations fail GSI and pass UFO, and 20.12% pass GSI and fail UFO:
[Screenshot: contingency table of GSI/UFO satwind acceptance]

Focusing first on the observations that fail GSI and pass UFO (GSI-no, JEDI-yes, or GnJy observations), these break down into two groups:

  1. GnJy observations caused by the qcgross test: The qcgross test, called SatWindsSPDBCheck in UFO, is not functioning correctly in JEDI. For the time being, these observations are being set aside until a decision can be made on how to fix SatWindsSPDBCheck. This accounts for 8729 (~13.43%) of GnJy observations.
  2. GnJy observations caused by a negative input-error: Some satwind observations have an ObsError value of _FillValue=-3.3687953e+38. These observations are not passing GSI QC, although they are assigned a PreQC value of 2, indicating they had not been rejected at the read_satwnd.f90 step when they were introduced to setupw.f90. In the GSI QC simulator, these observations fail within the qcgross step because of their negative input-error. Notably, all of these observations in the test data are Type=240 satwinds, which are shortwave-IR and are not normally assimilated. So this may simply be a case of an entire ob-type being pushed through GSI that is rejected by default. These account for 56277 (~86.57%) of GnJy observations.

Focusing on the observations that pass GSI and fail UFO (GSI-yes, JEDI-no, or GyJn observations), these are all attributable to:

  1. GyJn observations caused by the UFO blacklist: All GyJn observations in the test data are rejected in UFO with flag=14, which is the flag for rejection via a blacklisted observation according to QCflags.h. This blacklisting must take place prior to the application of the satwind filters, so these observations are not considered part of the test data for GSI/UFO acceptance.

Remaining work on satwind UFO acceptance:

  1. Current satwinds QC filter YAML file should be pushed back to GitHub, presumably to the GDASApp repo? Will need @CoryMartin-NOAA's input when he is available again. This will need future updates to address outstanding issues.
  2. 612918 (~76.54%) of GSI-rejected satwinds are rejected based on their PreQC value, which is inherited from rejection criteria upstream of setupw.f90 in read_satwnd.f90. This group needs to be diagnosed for its rejection criteria and have that criteria put into UFO in its own set of filters, rather than relying on the PreQC value for filtering.
  3. Deal with qcgross/SatWindsSPDBCheck

@BrettHoover-NOAA

BrettHoover-NOAA commented Oct 13, 2022

Satwind UFO acceptance work has generated most of the PreQC checks in YAML. Equivalents of GSI's qifn, qify, and ee quality information have been duplicated in the diag file, along with the satellite zenith angle; these are necessary for many of the PreQC filters. I am currently working on including the GSI's pct1 filter, which rejects any NESDIS ob with a pct1 (CVWD, coefficient of variation) value outside of 0.04–0.5. Including this filter appears to reduce the number of outstanding NESDIS observations to 774 total winds.
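The pct1 screen itself is just a window test; a minimal sketch (hypothetical variable names):

```python
import numpy as np

def pct1_reject(pct1, lo=0.04, hi=0.5):
    # GSI's pct1 (CVWD, coefficient of variation) screen for NESDIS winds:
    # reject when pct1 falls outside the 0.04-0.5 window
    p = np.asarray(pct1, dtype=float)
    return (p < lo) | (p > hi)
```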

YAML filters need to be staged into pre (assign errors), prior (ob- and geoval-based filters), and post (hofx-based filters) stages to run successfully, but the obs errors all show up as missing in the diag file. They are NOT missing when running GDASApp, because missing-error obs are rejected with flag=10, of which we have none.

@BrettHoover-NOAA

I've created a GDASApp fork in my own repository to add satwind.yaml to the develop branch. The current configuration of the YAML file contains all of the filters I believe I can reasonably reproduce, with some caveats, as well as essentially re-creating the satwinds portion of prepobs_errtable.global in the YAML's prior-filters to establish initial ob errors for each type.

The filters for QC are all currently commented out for a few reasons:

  1. For some reason all Type=245 (GOES LWIR AMVs) are getting rejected with either a bounds or blacklisting flag in UFO, and I don't know which filter is causing this. So I'd like to turn them on one at a time to find the offending filter.
  2. I have discovered that the GSI treats some batches of winds differently from others based on the originating BUFR dump subset, which is set by ObsProc; that information is unavailable in the diag file for setting filters. The different subsets for the same ob-types (e.g., a set of GOES AMVs from one group of subsets and a set of the same type/subtype GOES AMVs from another group) seem to align in an "old-format" and "new-format" grouping, and QC can be treated differently depending on which group the obs come from. This makes construction of QC filters difficult, since I only have type/subtype and a few other metavariables to work with to figure out which obs go to which filter; whether an ob came from one dump subset or another is invisible to me. I'm going to talk with Iliana about how to parse this, but in the meantime there are some sets of filters (many of them pertaining to GOES winds) that have to be shelved until I know more. It's odd that we would be using GOES AMVs from two different formats simultaneously, especially outside of some overlap/test period around when the new format is introduced, but some of my tinkering with filters implies that both sets of QC are being applied (some obs to one set of QC, some to the other). So I need straight answers on that.

I need to add a filter for read_satwnd.f90 L 921–973 to filter VIIRS winds based on the value of qifn.

Another note: I have the initial errors set in the YAML file, but I intentionally cut the profile-settings value in half for the GOES AMVs (Type 245,246,247), because read_satwnd.f90 L1014–1016 appears to do this for these winds. However, this could apply to only one of the two subsets (based on format), so this ties back to the unanswered question about subsets for the same AMV type/subtypes based on BUFR subset/tank-name.

@BrettHoover-NOAA

I believe that I have closed the GSI/UFO Acceptance of satwinds following an audit of YAML filters.

The Type=245 rejections were caused by a mistake in my use of where statements in the YAML file, essentially treating and conditions as or conditions and allowing all Type=245 winds to be rejected whether or not they met the other criteria. This has been fixed in the latest YAML file.

A contingency table comparing UFO and GSI acceptance shows that there are 68,776 winds in disagreement, all of them rejected by GSI and accepted by JEDI:
[Screenshot: contingency table of UFO vs. GSI satwind acceptance after the where-statement fix]

Examining these disagreements, 11096 winds come from the qcgross/SatWindsSPDBCheck test, which is not implemented in the YAML because there are likely problems with how it is implemented; I may revisit this now that the initial errors are explicitly defined in the YAML for each observation type. Another 56561 winds have a missing value for the GSI's initial error, defaulting instead to a negative _FillValue. These are all Type=240 winds, defined as GOES shortwave-IR winds, which are not assimilated according to the settings in global_convinfo.txt.

The remaining 1119 winds in disagreement are all rejections subject to the GSI's experr_norm test, which computes a norm from the expected error (ee), divides it by the wind speed, and rejects any observation with a ratio >0.9:

                 experr_norm = 10.0_r_kind - 0.1_r_kind * ee   ! introduced by Santek/Nebuda 
                 if (obsdat(4) > 0.1_r_kind) then  ! obsdat(4) is the AMV speed
                    experr_norm = experr_norm/obsdat(4)
                 else
                    experr_norm = 100.0_r_kind
                 end if
                 if (experr_norm > 0.9_r_kind) qm=15 ! reject data with EE/SPD>0.9

This test accounts for all remaining disagreements, being applied to all NESDIS winds. In single-precision testing in python, this test also incorrectly flags 5 winds as rejections when they otherwise pass QC, but all of these cases have an experr_norm value of 0.90000004, which looks like a precision artifact.
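A direct python port of the Fortran above, with a dtype switch for exploring the precision sensitivity (computing in float32 can land borderline cases just above or below the 0.9 threshold):

```python
import numpy as np

def experr_norm_reject(ee, spd, dtype=np.float64):
    # Port of the experr_norm test from read_satwnd.f90:
    # norm = (10 - 0.1*ee) / speed  (or 100 when speed <= 0.1);
    # reject when norm > 0.9
    ee = np.asarray(ee, dtype=dtype)
    spd = np.asarray(spd, dtype=dtype)
    norm = dtype(10.0) - dtype(0.1) * ee
    norm = np.where(spd > dtype(0.1), norm / spd, dtype(100.0))
    return norm > dtype(0.9)
```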

This test cannot be expressed directly in the YAML file because it references two variables, the expected error and the wind speed, and there is no way to perform the required arithmetic in the YAML. I believe an obsfunction will have to be written for this test, similar to the SatWindsSPDBCheck and SatWindsLNVDCheck functions. I will therefore raise this as an exception in #47

@BrettHoover-NOAA

I believe we can now confidently say that satwinds have been validated in UFO. Four critical obsfunctions have been introduced since last October:

  • ObsErrorFactorDuplicateCheck: Replicates the duplicate observation and error inflation check in setupw.f90
  • SatWindsSPDBCheck: Refactored to correctly apply gross-error check to satwinds as per setupw.f90
  • ObsErrorFactorPressureCheck: Replicates error inflation for obs close to vertical bounds of model as per setupw.f90
  • SatWindsErrnormCheck: Replicates the experr_norm check for satwinds as per read_satwnd.f90

The inclusion or refactoring of these obsfunctions completes the tools necessary to generate acceptance and final ob-errors in UFO that match GSI, but two caveats result in minor differences:

  1. GSI handles ob duplicate detection and error inflation in setupw.f90, where all wind observations are visible, allowing a more complete search for duplicates across observation types. UFO performs a similar duplicate check only within the satwind obs file. This can result in fewer duplicate observations being detected and small differences in assigned errors.
  2. There may be disagreements between UFO and GSI where float precision/handling causes a difference in how an obsfunction is applied, even if the function is otherwise identical and being handled in an identical way between both systems.
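Caveat 1 can be sketched as follows: a hypothetical within-file duplicate search keyed on lat/lon/pressure. A cross-type search, as in setupw.f90, would also see other wind observation types (rawinsonde, aircraft, etc.) and could therefore flag more duplicates than this satwind-only pass.

```python
import numpy as np

# Illustrative satwind metadata (not actual obs); duplicates are
# defined here as obs sharing lat/lon/pressure within one file.
lat  = np.array([10.0, 10.0, 20.0])
lon  = np.array([100.0, 100.0, 150.0])
pres = np.array([50000.0, 50000.0, 30000.0])

keys = list(zip(lat.tolist(), lon.tolist(), pres.tolist()))
counts = {k: keys.count(k) for k in keys}
is_duplicate = np.array([counts[k] > 1 for k in keys])
```

Obs seen only in the satwind file can never be matched against duplicates of other types, which is the source of the small error differences described in caveat 1.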

In the test data, 575,443 u-observations and an identical set of v-observations pass QC in GSI, while UFO makes 8 additional rejections, accepting 575,435 obs for each variable. These 8 UFO rejections are all satwinds subject to the SatWindsErrnormCheck, and reproducing the error-norm value from satwind diag-file data shows that all 8 satwinds have error-norm values either exactly equal to 0.9 or within +/-1.0e-07 of it. The value 0.9 is significant because it is the critical threshold separating acceptance from rejection in SatWindsErrnormCheck, following the GSI's own criteria. The most likely conclusion is that these 8 satwinds are being correctly excluded by SatWindsErrnormCheck and differ from GSI only because of a float precision/handling difference between the two code bases.
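The borderline-threshold check described above amounts to something like the following sketch (array values are illustrative, not the actual 8 satwinds from the diag files):

```python
import numpy as np

# Recompute the error-norm from diag-file quantities and flag
# values within 1.0e-07 of the 0.9 rejection threshold.
ee = np.array([10.0, 10.000001, 50.0])   # expected error
speed = np.array([10.0, 10.0, 2.0])      # AMV speed (m/s)

norm = (10.0 - 0.1 * ee) / speed
borderline = np.abs(norm - 0.9) <= 1.0e-7
```

Obs flagged as borderline sit so close to the threshold that single-precision rounding in either code base can flip the accept/reject decision.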

Assigned observation errors agree within 1.0E-04 m/s for all observations, the vast majority within 1.0E-07 m/s, with the exception of 3 observations whose assigned ob-errors differ by 0.01–0.04 m/s. These 3 outliers have extremely large errors, between 125.0 and 190.0 m/s, and can effectively be treated as rejected observations. All 3 are at reported pressures greater than the surface pressure reported in their respective geovals, indicating they are assigned to levels very close to (or beneath) the bottom level of the model, so ob-error is inflated to extreme values via ObsErrorFactorPressureCheck. The differences between UFO and GSI errors for these observations are large compared with other satwinds, but the percentage difference is only roughly 0.02% because the assigned errors are so large. These differences are again likely caused by float precision/handling differences between code bases for these extreme cases at pressures greater than the reported surface pressure, and do not represent significant differences in assigned ob-error.
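As a sanity check on the quoted 0.02% figure (values are picked from the ranges stated above, not actual observations):

```python
# A 0.03 m/s disagreement on a ~150 m/s assigned error
# (within the 125.0-190.0 m/s range quoted above).
gsi_err = 150.0            # m/s
ufo_err = gsi_err + 0.03   # m/s, within the 0.01-0.04 range
pct_diff = abs(ufo_err - gsi_err) / gsi_err * 100.0
```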

With these caveats in mind:

  • 8 observations (16 ob-variables by UFO's accounting) rejected in UFO but accepted by GSI, likely caused by float precision/handling differences with the SatWindsErrnormCheck or its GSI equivalent
  • 3 observations (6 ob-variables by UFO's accounting) with assigned ob-errors in UFO that differ from GSI by roughly 0.02%, likely caused by float precision/handling differences in the ObsErrorFactorPressureCheck or its GSI equivalent for extreme cases where observations are at pressures greater than the reported surface pressure.

Satwinds can be considered to be in acceptance and ob-error agreement with GSI for the JCSDA-internal/ufo develop build. Issue #47 will be updated to reflect these caveats as outstanding satwind exceptions.
