
Train a categorical model #39

Open
raspstephan opened this issue Mar 23, 2021 · 5 comments

@raspstephan
Owner

It might make sense to also look into a MetNet-style categorical model. I am pretty sure this would work. It would be (a) a good baseline and (b) a good fallback option if we never get a GAN to train.

I would also like to try sampling from the probability output with a correlated random field. This might actually end up looking quite realistic and, I suspect, would be hard to beat score-wise with a GAN.
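As a sketch of what this sampling could look like (this is an assumption about the method, not the repo's actual implementation): smooth white noise with a Gaussian kernel to get a spatially correlated field, map it through the normal CDF so each pixel is marginally uniform, then invert the per-pixel categorical CDF at those uniform values. The function name, array shapes, and the Gaussian-filter correlation model are all hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def sample_correlated(probs, bin_values, length_scale=5.0, rng=None):
    """Draw one spatially correlated sample from per-pixel categorical probabilities.

    probs: (n_bins, H, W) class probabilities, summing to 1 along axis 0.
    bin_values: (n_bins,) representative value for each bin.
    length_scale: correlation length of the random field, in pixels (assumed model).
    """
    rng = np.random.default_rng(rng)
    n_bins, h, w = probs.shape
    # White noise smoothed with a Gaussian kernel -> spatially correlated field.
    field = gaussian_filter(rng.standard_normal((h, w)), sigma=length_scale)
    field /= field.std()  # restore roughly unit variance after smoothing
    u = norm.cdf(field)   # correlated field with uniform(0, 1) marginals
    # Invert the per-pixel categorical CDF at u.
    cdf = np.cumsum(probs, axis=0)
    idx = (u[None] > cdf).sum(axis=0)  # index of first bin whose CDF reaches u
    idx = np.clip(idx, 0, n_bins - 1)
    return bin_values[idx]
```

Because neighbouring pixels see similar values of `u`, the drawn categories are spatially coherent instead of salt-and-pepper noise, while each pixel's marginal distribution still matches the predicted probabilities.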

@raspstephan raspstephan added this to To do in NWP downscaling Mar 23, 2021
@raspstephan
Owner Author

I got a first end-to-end version working: train a categorical net, then sample from it with a 2D correlated random field. So far I have not trained it on the full data or with enough bins, but here is approximately what it looks like.

[image]

[image]

Next steps are: 1) Copy changes into src; 2) Train on full data with more bins; 3) Evaluate it; 4) Improve net by adding more variables, etc.
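Training with more bins presupposes a binning of the continuous target. A minimal sketch of how that discretization might look, using `np.digitize` with hypothetical log-spaced precipitation bin edges (the actual edges used in the notebooks may differ):

```python
import numpy as np

# Hypothetical bin edges in mm: everything below 0.1 mm counts as "dry" (bin 0),
# then 15 log-spaced wet bins from 0.1 to 100 mm, with >= 100 mm in the top bin.
bin_edges = np.logspace(-1, 2, 16)

def to_categorical_target(precip, edges=bin_edges):
    """Map continuous precipitation to integer bin indices for a categorical loss.

    Returns i such that edges[i-1] <= x < edges[i]; values below edges[0]
    get index 0, values >= edges[-1] get index len(edges).
    """
    return np.digitize(precip, edges)
```

A log spacing keeps the many near-zero values from swamping a few wide bins while still resolving heavy-precipitation extremes; the number and range of edges here are purely illustrative.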

@raspstephan raspstephan moved this from To do to In progress in NWP downscaling Mar 31, 2021
@raspstephan raspstephan moved this from In progress to To do in NWP downscaling Apr 23, 2021
@raspstephan
Owner Author

Early results from notebooks 01 and 02: 01 with batch norm, 02 without. Basically no difference; two epochs seem to be enough.

[image]
[image]

Very zero-heavy and a little noisy, but it seems reasonable.
[image]

Next steps: proper validation, and improving the net with more input variables.

@raspstephan
Owner Author

Adding orography and the land-sea mask (notebooks 03 and 04) on their own doesn't change the skill at all yet. Maybe without additional atmospheric variables this just isn't enough information. I would still have expected orography to help a little. Hmm. But maybe the uncertainty is simply too large for these fields to make any impact? Let's add both together, plus some more variables one by one.
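For reference, a common way to feed time-invariant fields like orography and the land-sea mask to a CNN is to broadcast them along the sample axis and append them as extra input channels. A minimal numpy sketch with hypothetical shapes (not necessarily how the notebooks wire this up):

```python
import numpy as np

def add_static_channels(X, *static_fields):
    """Append time-invariant 2D fields (e.g. orography, land-sea mask)
    as extra channels to a (samples, channels, H, W) input array.
    broadcast_to creates views, so no copy is made until the concatenate."""
    n, _, h, w = X.shape
    extra = [np.broadcast_to(f, (n, 1, h, w)) for f in static_fields]
    return np.concatenate([X] + extra, axis=1)
```

Normalizing the static fields to a similar scale as the forecast variables before concatenating usually matters; orography in raw metres can otherwise dominate the input range.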

@raspstephan
Owner Author

Also, try the better loss!!!

@raspstephan
Owner Author

Adding temperature seems to help a little (notebook 05).

[image]
