This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
DISCUSSION: comparison for uninitialized forecast #352
Here, I am thinking out loud about uninitialized skill. Does the comparison argument make sense here?
Why am I thinking about this now? I started comparing monthly skill from other MPIESM initialized ensembles.
So far I have been using it with a comparison keyword: I first construct an uninitialized ensemble, and then pipe that into the same machinery as I used for initialized skill.
For perfect-model setups in climpred with the comparison `m2e`, this means I compare the uninitialized ensemble mean to every uninitialized member.

(CURRENTLY) I am comparing an uninitialized forecast against an uninitialized verification, asking: how well can an uninitialized member forecast another uninitialized member?

(ALTERNATIVE) Another way would be to use the same verification members as I use for the initialized skill, asking: how well can an uninitialized member forecast an initialized member?

The second option sounds closer to what is done in hindcasts.
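To make the two options concrete, here is a minimal numpy sketch (not climpred's actual implementation; `m2e_skill`, the toy arrays, and the member counts are all hypothetical). It builds random stand-ins for an uninitialized and an initialized ensemble, then computes an m2e-style skill score both ways: verifying uninitialized members against the uninitialized ensemble mean (CURRENT) versus against the initialized ensemble mean (ALTERNATIVE).

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_time = 10, 50

# Hypothetical toy ensembles, stand-ins for climpred datasets:
# rows are members, columns are verification times.
uninit = rng.normal(size=(n_members, n_time))  # uninitialized members
init = rng.normal(size=(n_members, n_time))    # initialized members

def m2e_skill(forecast_members, verif_members):
    """m2e-style skill: correlate each forecast member with the
    ensemble mean of the verification members, then average the
    correlations (leave-one-out handling omitted for simplicity)."""
    verif_mean = verif_members.mean(axis=0)
    corrs = [np.corrcoef(m, verif_mean)[0, 1] for m in forecast_members]
    return float(np.mean(corrs))

# CURRENT: uninitialized members verified against the uninitialized mean.
current = m2e_skill(uninit, uninit)

# ALTERNATIVE: uninitialized members verified against the initialized mean,
# i.e. the same verification target as the initialized skill.
alternative = m2e_skill(uninit, init)

print(current, alternative)
```

Note the design difference this exposes: in the CURRENT option each member is correlated with a mean that contains itself, so the score has a built-in positive floor, while the ALTERNATIVE measures skill against an independent target, as a hindcast-style verification would.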