[ENH] cleaned up probabilistic forecasting tests for quantile and interval predictions #4393
Conversation
Continuing what I said here, does it make sense to change sktime/sktime/forecasting/tests/test_all_forecasters.py, lines 406 to 410 (at e75cfc4), to the effect of this?

```python
expected_columns = ["Quantiles"] if isinstance(y_train, pd.Series) else y_train.columns
expected_quantiles = [alpha] if isinstance(alpha, float) else alpha
expected = pd.MultiIndex.from_product([expected_columns, expected_quantiles])
```

And then add at least one list element in here, for example:

```python
TEST_ALPHAS = [0.05, 0.1, [0.25, 0.75]]
```

Please let me know what you think.
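To illustrate the suggestion above, here is a minimal, self-contained sketch of the proposed column check. The helper `expected_quantile_columns` is a hypothetical name wrapping the three suggested lines, not sktime test code:

```python
import pandas as pd


def expected_quantile_columns(y_train, alpha):
    """Build the column MultiIndex that predict_quantiles should return.

    Hypothetical helper mirroring the snippet suggested above.
    """
    expected_columns = ["Quantiles"] if isinstance(y_train, pd.Series) else y_train.columns
    expected_quantiles = [alpha] if isinstance(alpha, float) else alpha
    return pd.MultiIndex.from_product([expected_columns, expected_quantiles])


# scalar alpha on a univariate (pd.Series) target
print(list(expected_quantile_columns(pd.Series([1.0, 2.0]), 0.05)))
# [('Quantiles', 0.05)]

# list alpha, as in the proposed TEST_ALPHAS = [0.05, 0.1, [0.25, 0.75]]
print(list(expected_quantile_columns(pd.Series([1.0, 2.0]), [0.25, 0.75])))
# [('Quantiles', 0.25), ('Quantiles', 0.75)]
```

Handling both the scalar and the list case in one expression is what makes a mixed `TEST_ALPHAS` list possible.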
Yes, that's better! Much clearer, and it adds a test case with multiple alphas. I'd be happy if you branch off this PR and make the change. Or suggest the change (using the GitHub graphical web interface) and I'll accept it.
Done. See #4394.
…tors (#4391)

Fixes #4386

Before this PR, `predict_interval` or `predict_quantiles` of `ForecastX` did not pass the `coverage` or `alpha` parameters supplied by the user to the corresponding methods of `self.forecaster_y_`. This PR addresses this bug.

##### before

```pycon
>>> pipe.predict_interval(fh=fh, X=X_test.drop(columns=columns), coverage=0.95)
          Coverage
               0.9
             lower         upper
1960  69583.430473  70587.653223
1961  69569.814972  70576.864647
1962  72161.834476  73168.900067
>>> pipe.predict_quantiles(fh=fh, X=X_test.drop(columns=columns), alpha=[0.25, 0.75])
         Quantiles
              0.05          0.95
1960  69583.430473  70587.653223
1961  69569.814972  70576.864647
1962  72161.834476  73168.900067
```

##### after

```pycon
>>> pipe.predict_interval(fh=fh, X=X_test.drop(columns=columns), coverage=0.95)
          Coverage
              0.95
             lower         upper
1960  69487.239242  70683.844453
1961  69473.352959  70673.326660
1962  72065.370939  73265.363604
>>> pipe.predict_quantiles(fh=fh, X=X_test.drop(columns=columns), alpha=[0.25, 0.75])
         Quantiles
              0.25          0.75
1960  69879.645730  70291.437965
1961  69866.864086  70279.815532
1962  72458.888285  72871.846258
```

There are two other methods of `ForecastX` which do not use passed parameters, viz. `cov` in `_predict_var` and `marginal` in `_predict_proba`. However, these are not used even in `BaseForecaster`. I've made changes (similar to the above two) so that these parameters are always passed, whether or not they are used by any estimator.

Test coverage will be added by another PR, see the discussion there:
* #4393
* #4394
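The before/after outputs above show that the bug is visible directly in the returned column index: the requested coverage must appear in the second column level. A test can therefore catch it with a plain column comparison. A minimal sketch, assuming only pandas (the column layout shown in the outputs above):

```python
import pandas as pd

coverage = 0.95  # the coverage the user requested

# Columns of predict_interval form a 3-level MultiIndex:
# (variable, coverage, "lower"/"upper").
# What the buggy ForecastX returned (default coverage 0.9):
returned_cols = pd.MultiIndex.from_product([["Coverage"], [0.9], ["lower", "upper"]])

# What the user's call should have produced:
requested_cols = pd.MultiIndex.from_product([["Coverage"], [coverage], ["lower", "upper"]])

print(returned_cols.equals(requested_cols))  # False -> the bug is surfaced
```

This is exactly why extending the tests to check expected columns (as in #4393/#4394) covers the fix.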
Great, thanks! I will merge in this sequence:
First, I'll update from main and ensure the tests pass here. If that is the case, we merge to main, then update #4394 from main and check whether the extended tests still run.
This PR cleans up probabilistic forecasting tests for quantile and interval predictions, i.e., `test_predict_interval` and `test_predict_quantiles`. It makes the two tests consistent, and ensures both check the expected index and columns, which they previously did not.
Minor additions in this PR connected to the tests:

* a test scenario with `ForecastX`, which would surface the bug [BUG] `ForecastX` does not pass parameters to proba prediction methods #4386 (and covers its fix)
* `BaggingForecaster` and `DynamicFactor` have a broken `predict_interval`; this is a known issue, see [BUG] `DynamicFactor` returns incorrect index for probabilistic forecasts #4362 and [BUG] `BaggingForecaster` returns incorrect index for probabilistic forecasts #4363. They also need to be excepted from the updated `test_predict_interval`, which would now also surface the same bug.

Depends on #4399, as fixes of bugs detected through this are in #4399.
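To make the two cleaned-up tests symmetric, the interval test needs an expected-columns construction analogous to the quantile one, just with an extra `"lower"`/`"upper"` level. A minimal sketch; `expected_interval_columns` is a hypothetical helper illustrating the idea, not the actual sktime test code:

```python
import pandas as pd


def expected_interval_columns(y_train, coverage):
    """Expected predict_interval columns: (variable, coverage, lower/upper).

    Hypothetical helper, mirroring the quantile-column logic discussed above.
    """
    variables = ["Coverage"] if isinstance(y_train, pd.Series) else list(y_train.columns)
    coverages = [coverage] if isinstance(coverage, float) else list(coverage)
    return pd.MultiIndex.from_product([variables, coverages, ["lower", "upper"]])


print(list(expected_interval_columns(pd.Series([1.0, 2.0]), 0.9)))
# [('Coverage', 0.9, 'lower'), ('Coverage', 0.9, 'upper')]
```

With both helpers handling scalar and list arguments the same way, `test_predict_interval` and `test_predict_quantiles` can share one parametrization over coverages/alphas.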