Discussion Points for 2022/07/27 Group Meeting #28
Response from Gab: "In your context, I think the biggest benefit of using
something like CLASS (or indeed the pre-closure equivalents) would be the
observationally constrained uncertainty estimates that might somehow get
included in the analyses in the end. Unlike a spread across competing
gridded products, these uncertainty bounds represent an observationally
constrained estimate. They reflect the discrepancy between in-situ
measurements and the product (where observations do exist) and manage to
use those observations to create a spatiotemporally complete uncertainty
estimate. In reality it’s a very generous uncertainty estimate, because in
addition to accounting for observed vs gridded product mismatches, it also
implicitly includes the mismatch that comes from the different spatial
scale of in-situ (~1km2) versus gridded data (and so site region
heterogeneity plays a role). Best thought of as the expected agreement of
an unseen in-situ (think flux tower) measurement within the grid cell of
the final product and the grid cell value. Hopefully that makes sense."
And a response from Sanaa Hobeichi (***@***.***), who is the person
who actually developed the products:
1. Gab is right. I've looked before at the biases of the datasets that
were involved in deriving Pre-DAT Rn, i.e. the pre-closure equivalent of
CLASS-Rn, and they all have large positive biases against in-situ
observations; these are CERES-EBAF, ERAI, MERRA-2, and GLDAS-Noah. Both
Pre-DAT and CLASS have inherited some of these biases. The mean bias plot
(from the CLASS paper
<https://journals.ametsoc.org/view/journals/clim/33/5/jcli-d-19-0036.1.xml>)
shows the distribution of bias across 164 flux tower sites: positive and
negative biases in CLASS are equally distributed across sites (median
~0), and the magnitude of positive bias is larger than that of negative
bias.
[Figure: box-and-whisker chart of the mean bias distribution across the flux tower sites, from the CLASS paper]
2. I think that the benefit of comparing with DOLCE V2
<https://researchdata.edu.au/derived-optimal-linear-dolce-v21/1463675> /
DOLCE V3
<https://researchdata.edu.au/derived-optimal-linear-dolce-v30/1697055>
and LORA is that you get to compare the datasets over a longer time
period, i.e. 29 years and 23 years in DOLCE V2/V3 and LORA respectively.
My understanding is that CMIP6 models are out of phase with observations,
which means that years 2000-2009 in CMIP6 and CLASS are not necessarily
equivalent; over a longer time period, however, the comparison can be
more meaningful.
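To make the longer-period point concrete, here is a minimal sketch of a climatology-based comparison. The file names, variable name, and averaging window are hypothetical (not from the thread), and both fields are assumed to already be on the same grid:

```python
# Sketch: compare long-term climatologies rather than matching individual
# years, since CMIP6 internal variability is out of phase with observations.
# File names, the variable name ("hfls"), and the averaging window are
# illustrative assumptions; regrid first if the grids differ.
import xarray as xr

model = xr.open_dataset("cmip6_model_hfls.nc")["hfls"]  # model latent heat flux
obs = xr.open_dataset("dolce_v3_hfls.nc")["hfls"]       # DOLCE latent heat flux

# Average over a shared multi-decadal record so that year-to-year phase
# differences drop out of the comparison.
model_clim = model.sel(time=slice("1985", "2013")).mean("time")
obs_clim = obs.sel(time=slice("1985", "2013")).mean("time")

bias_map = model_clim - obs_clim  # climatological bias at each grid cell
print(float(bias_map.mean()))     # crude (area-unweighted) global summary
```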
On Thu, Jun 9, 2022 at 8:39 AM nocollier ***@***.***> wrote:
Collecting discussion points for next meeting:
- Any response from Gab about CLASS low net radiation? Should we drop
DOLCE and LORA? @dlawrenncar <https://github.com/dlawrenncar>
- Last meeting, we had some unresolved discussion about the surface
soil moisture from WangMao. In particular, CESM2 and NorESM are quite wet
in high latitudes relative to the data product, and we wondered if this
effect is real, especially given the caution in Dirmeyer2016
<https://journals.ametsoc.org/view/journals/hydr/17/4/jhm-d-15-0196_1.xml>
about combining soil moisture data carefully. Here is the dataset
publication: Wang2021 <https://essd.copernicus.org/articles/13/4385/2021/>.
@jiafumao <https://github.com/jiafumao>
- For the higher dimensional soil moisture product, @ypwong22
<https://github.com/ypwong22> is working on adding models. Any updates?
- I have done some initial work on an adaptation of Umakant's soil carbon
data. He has several layers of data, but two I think are particularly
useful to us: cSoilAbove1m
<https://www.climatemodeling.org/~nate/ILAMB-Test/comparisons/cSoilAbove1m.png>
and cSoil
<https://www.climatemodeling.org/~nate/ILAMB-Test/comparisons/cSoil.png>,
which is the soil carbon above 3 m. Do we compare separately to both?
Umakant's estimates are high relative to the other products we have and
even correlate poorly with NCSCDV22. How could we check that
my coarsening strategy is reasonable? (One possible check is sketched below.)
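One way to check a coarsening strategy, as a minimal sketch under assumed resolutions (not the actual pipeline): verify that an area-weighted block average conserves the area-integrated global total, which any conservative coarsening of an intensive field like soil carbon density should.

```python
# Sanity check for a coarsening strategy: soil carbon density (kg m-2) is an
# intensive field, so an area-weighted block average should preserve the
# area-integrated global total. Resolutions and data here are illustrative.
import numpy as np

def cell_areas(lat, dlat, dlon, radius=6.371e6):
    """Approximate areas (m2) of regular lat/lon cells centered on lat."""
    return (radius**2 * np.deg2rad(dlon)
            * (np.sin(np.deg2rad(lat + dlat / 2))
               - np.sin(np.deg2rad(lat - dlat / 2))))

def coarsen(field, areas, f):
    """Area-weighted block average, coarsening by an integer factor f."""
    ny, nx = field.shape
    num = (field * areas).reshape(ny // f, f, nx // f, f).sum(axis=(1, 3))
    den = areas.reshape(ny // f, f, nx // f, f).sum(axis=(1, 3))
    return num / den

# Hypothetical 0.1-degree grid coarsened by a factor of 5 to 0.5 degrees.
dlat = dlon = 0.1
lat = np.arange(-90 + dlat / 2, 90, dlat)                 # 1800 rows
areas = np.repeat(cell_areas(lat, dlat, dlon)[:, None], 3600, axis=1)
csoil = np.random.default_rng(0).random((1800, 3600))     # stand-in data

coarse = coarsen(csoil, areas, 5)
coarse_areas = areas.reshape(360, 5, 720, 5).sum(axis=(1, 3))

total_fine = (csoil * areas).sum()
total_coarse = (coarse * coarse_areas).sum()
print(abs(total_fine - total_coarse) / total_fine)  # should be ~machine zero
```

If the relative difference is not near machine precision, the coarsening is not conservative (e.g., an unweighted mean near the poles), which could help explain systematic offsets against the other products.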
From @mmu2019, re: biomass: GEOCARBON is different from Global.Carbon; although both are products based on Saatchi's tropical forest biomass, both are global products. GEOCARBON is from Martin Herold in Europe. I attached the readme for the original data, but the website linking to the original data that I downloaded has changed. Here is the new link: https://www.wur.nl/en/Research-Results/Chair-groups/Environmental-Sciences/Laboratory-of-Geo-information-Science-and-Remote-Sensing/Research/Integrated-land-monitoring/Forest_Biomass.htm Global.Carbon is Saatchi's new product. I think this dataset has now been released to the public; I got it from Saatchi through a personal exchange a couple of years ago, before it was published. Please contact me if you have any further questions. Thanks.
@mmu2019 found a reference for Saatchi's global dataset; we will work it back in.
To the soil carbon question: the data description here (https://bolin.su.se/data/ncscd/) for the netCDF files says it is to 3 m, but they also show a figure that is to 1 m. So perhaps we have to reach out to get both versions?
Updates on soil carbon improvements:
- `Global.Carbon` biomass product. The reference we provide discusses tropical biomass, but this is a global dataset. If you look here though, `Global.Carbon` and `Tropical` are different. I thought we had dropped this dataset? We need to restore GEOCARBON, and Saatchi's global dataset now has a reference.
- The new score is `score = | 1 - error / bad_biome_error |`, clipped to be on [0,1]. I have a comparison of the new vs. old scores as the quantile used to define the `bad_biome_error` is changed. I also produced a CMIP5 vs. CMIP6 comparison. The 98th quantile is a defensible choice, but it leads to scores which use very little of the [0,1] range. For that reason it seems better to use a lower quantile, but which one, and what principle do we use to justify the choice?
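A minimal sketch of this scoring rule and the quantile sensitivity under discussion, taking the formula exactly as quoted and using synthetic errors in place of real model-data mismatches:

```python
# Sketch of the biome-relative score as quoted above and its sensitivity to
# the quantile defining bad_biome_error. The error values are synthetic
# stand-ins, not ILAMB output.
import numpy as np

rng = np.random.default_rng(0)
errors = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # per-cell error

for q in (0.70, 0.80, 0.90, 0.98):
    bad_biome_error = np.quantile(errors, q)
    # Formula as written in the thread, clipped onto [0, 1].
    scores = np.clip(np.abs(1.0 - errors / bad_biome_error), 0.0, 1.0)
    # A high quantile inflates bad_biome_error, pushing most scores toward 1
    # and compressing the usable part of the [0, 1] range.
    print(f"q={q:.2f}: mean score = {scores.mean():.2f}, "
          f"5th percentile = {np.percentile(scores, 5):.2f}")
```

Running this kind of sweep shows the trade-off directly: as q drops, the scores spread out over more of [0,1], but the choice of q becomes the principle that needs justifying.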