How to handle uncertainty in future Covariates? #3891
Unanswered
BenediktPrusas
asked this question in Q&A
Replies: 1 comment
-
Hi, this is a really good question to which I, unfortunately, don't have a good answer. Most likely, the answer heavily depends on your application / dataset / accuracy of your forecast for the unknown covariates. I would recommend evaluating 3 versions of the model.
I would then compare the performance of the 3 versions on some held-out data and select the one that works best for your application.
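A minimal sketch of that held-out comparison in plain Python (the three variant names and all numbers are hypothetical placeholders, standing in for predictions from models trained with different covariate setups):

```python
# Pick the covariate-handling variant with the lowest MAE on held-out data.
# Variant names and values below are made up for illustration only.

def mae(forecast, actual):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

held_out = [10.0, 12.0, 11.0, 13.0]  # actual values kept out of training

variants = {
    "actual_weather":   [10.5, 12.4, 10.8, 13.1],
    "forecast_weather": [10.2, 12.1, 11.1, 12.9],
    "no_weather":       [11.5, 13.0, 12.2, 14.0],
}

scores = {name: mae(pred, held_out) for name, pred in variants.items()}
best = min(scores, key=scores.get)
```

With real models you would generate each variant's predictions via backtesting over the held-out period instead of hard-coding them.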
-
Hello everyone,
amazing library so far! But right now, I am unsure how to handle uncertainty in future covariates. I want to use historic weather forecasts as future covariates, so they evolve over time. If I only use the actual weather data, the learned probability distribution will not reflect the uncertainty of the weather forecast and will be overconfident.
So far, I thought I could create one time series for every training example, where each series is just long enough to make exactly one prediction. But this isn't exactly elegant.
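Concretely, that per-origin slicing could be sketched like this (plain Python lists as stand-ins for `TimeSeries` objects; the data and the `forecasts_issued_at` mapping are hypothetical):

```python
# Build one training example per forecast origin t, pairing the target
# history up to t with the weather forecast *issued at* t, so the model
# only ever sees forecast-grade covariates during training.

input_len, output_len = 3, 1

target = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
# forecasts_issued_at[t]: weather forecast issued at time t,
# covering t+1 .. t+output_len (toy values for illustration)
forecasts_issued_at = {t: [0.1 * t] * output_len for t in range(len(target))}

examples = []
for t in range(input_len, len(target) - output_len + 1):
    history = target[t - input_len:t]   # past target values up to origin t
    covars = forecasts_issued_at[t]     # the forecast actually available at t
    label = target[t:t + output_len]    # values the model should predict
    examples.append((history, covars, label))
```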
Similarly, I have the situation that the reported values of my time series might get corrected in the first days (e.g. due to measurement equipment failures), so here too the model could learn to be overconfident in the reported past target values. I would need to add a series for every correction that occurred and make it exactly one training sample long.
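One way to express the corrections idea is with "as-of" snapshots: reconstruct the target series as it looked on each date, so a training example only contains the values (possibly still uncorrected) that were actually reported at that time. A sketch with a hypothetical `revisions` structure:

```python
# revisions[t] maps report_date -> the value reported for time t on that date.
# Toy data: the value for t=0 was first reported as 9.0, then corrected
# to 10.0 at date 2; the value for t=2 was revised at date 3.
revisions = {
    0: {0: 9.0, 2: 10.0},
    1: {1: 5.0},
    2: {2: 7.0, 3: 7.5},
}

def as_of(revisions, date):
    """Series as it appeared on `date`: the latest report for each t known by then."""
    series = {}
    for t, reports in revisions.items():
        known = {d: v for d, v in reports.items() if d <= date}
        if known:
            series[t] = known[max(known)]
    return series
```

Each snapshot `as_of(revisions, d)` could then serve as the past-target input for the training example whose forecast origin is `d`, instead of slicing out a separate tiny series per correction.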
Are there any better ways to approach this? Custom Datasets or DataLoaders? I would also be fine with moving to GluonTS or only using deep learning models.
Thanks in advance!