Feat/stochastic inputs #833
Conversation
Codecov Report
@@ Coverage Diff @@
## master #833 +/- ##
==========================================
- Coverage 91.33% 91.32% -0.02%
==========================================
Files 69 69
Lines 6869 6872 +3
==========================================
+ Hits 6274 6276 +2
- Misses 595 596 +1
Continue to review full report at Codecov.
Looks good, thanks :) I had a question about a potential additional option for randomly sampling the quantiles.
preds = [model.predict(series=stochastic_series, n=10) for _ in range(2)]

# random samples should differ
self.assertFalse(np.alltrue(preds[0].values() == preds[1].values()))
This is nitpicking, but isn't there a slim chance that the predictions will be identical? :D We could add a seed to make sure they are not.
Yes, normally there is a seed earlier in the file which I believe should apply here :)
@@ -1176,6 +1176,28 @@ def values(self, copy=True, sample=0) -> np.ndarray:
        else:
            return self._xa.values[:, :, sample]

    def random_component_values(self, copy=True) -> np.array:
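The diff only shows the new method's signature. For context, here is a minimal NumPy sketch of what such a method plausibly does (the actual darts implementation is not visible in this hunk, so the body below is an assumption, not the real code):

```python
import numpy as np

def random_component_values(values_3d, copy=True):
    """Sketch: return the values of one sample chosen uniformly at random.

    values_3d is assumed to have shape (time, components, samples),
    matching the (time, component, sample) layout of a darts TimeSeries.
    """
    sample = np.random.randint(values_3d.shape[2])  # uniform random sample index
    out = values_3d[:, :, sample]                   # shape (time, components)
    return out.copy() if copy else out

# 4 timesteps, 2 components, 3 stochastic samples
series = np.arange(24.0).reshape(4, 2, 3)
print(random_component_values(series).shape)  # (4, 2)
```

The `copy` flag mirrors the signature shown in the diff: when `False`, the caller gets a view into the underlying array instead of a copy.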
What do you think about an additional option to randomly sample a quantile per sample?
I would assume this is similar to quantile regression based on stochastic input (without needing QuantileRegression as a likelihood)
Oh, you mean returning the exact quantile instead of picking a sample at random? That's an interesting idea :) But it would require a bit of adaptation elsewhere, as we would still need to ensure this is applied only to the output targets, and not e.g. to the inputs or covariates. Let's keep the idea though.
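To illustrate the distinction being discussed, here is a hedged NumPy sketch (the array shape and the quantile level `q` are illustrative, not darts API):

```python
import numpy as np

rng = np.random.default_rng(0)
# A stochastic series laid out as (time, components, samples)
values = rng.normal(size=(50, 2, 1000))

# Current behaviour: pick one stored sample uniformly at random.
random_sample = values[:, :, rng.integers(values.shape[2])]

# Discussed alternative: compute an exact quantile over the sample axis.
q = 0.9
quantile_values = np.quantile(values, q, axis=2)

print(random_sample.shape, quantile_values.shape)  # both (50, 2)
```

Both reduce the sample axis to a single deterministic slice of shape `(time, components)`; the difference is that the quantile version summarizes all samples rather than selecting one of them.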
Handle stochastic series for `fit()` and `predict()` in Torch-based models.

When fitting: a sample is drawn from the series uniformly at random (instead of always taking the 1st sample, as before). This way the training procedure can naturally be carried over the stochastic samples.

When predicting: one sample is drawn from the series uniformly at random. Unfortunately, at the moment it would be quite complicated to batch the forward passes over all of a series' samples, so each call to `predict()` will only return the forecast related to this one sample. However, calling `sample()` many times and concatenating the results makes it possible to get stochastic forecasts even with deterministic models (not using a likelihood) when predicting stochastic series (or using stochastic covariates). Here's an example: