
[ENH] cleaned up probabilistic forecasting tests for quantile and interval predictions #4393

Merged
merged 15 commits into `main` from `fixed-proba-tests` on Apr 1, 2023

Conversation

@fkiraly (Collaborator) commented Mar 25, 2023

This PR cleans up probabilistic forecasting tests for quantile and interval predictions, i.e., test_predict_interval and test_predict_quantiles.

It makes the two tests consistent, and ensures both check the expected index and columns, which they previously did not.

Minor additions in this PR connected to the tests:

Depends on #4399, since fixes for the bugs detected through these tests are in #4399.

@fkiraly added labels on Mar 25, 2023: `module:forecasting` (forecasting module, incl probabilistic and hierarchical forecasting), `module:tests` (test framework functionality - only framework, excl specific tests), `enhancement` (adding new functionality)
@yarnabrina (Collaborator)

Continuing what I said here, does it make sense to change from this:

```python
if isinstance(y_train, pd.Series):
    expected = pd.MultiIndex.from_product([["Quantiles"], [alpha]])
else:
    # multiply variables with all alpha values
    expected = pd.MultiIndex.from_product([y_train.columns, [alpha]])
```

to the effect of this?

```python
expected_columns = ["Quantiles"] if isinstance(y_train, pd.Series) else y_train.columns
expected_quantiles = [alpha] if isinstance(alpha, float) else alpha
expected = pd.MultiIndex.from_product([expected_columns, expected_quantiles])
```

And then add at least one list-valued element here:

```python
TEST_ALPHAS = [0.05, 0.1]
```

For example:

```python
TEST_ALPHAS = [0.05, 0.1, [0.25, 0.75]]
```
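The suggested construction can be sketched as a small self-contained demo; the helper name `expected_quantile_columns` is illustrative, not part of sktime:

```python
import pandas as pd

def expected_quantile_columns(y_train, alpha):
    """Expected column MultiIndex of a predict_quantiles return, per the suggestion above."""
    # univariate targets are reported under a single "Quantiles" top level
    expected_columns = ["Quantiles"] if isinstance(y_train, pd.Series) else y_train.columns
    # normalise a scalar alpha to a one-element list
    expected_quantiles = [alpha] if isinstance(alpha, float) else alpha
    return pd.MultiIndex.from_product([expected_columns, expected_quantiles])

# scalar alpha, univariate target
print(expected_quantile_columns(pd.Series([1.0, 2.0]), 0.05).tolist())
# → [('Quantiles', 0.05)]

# list alpha, multivariate target
y_mv = pd.DataFrame({"a": [1.0], "b": [2.0]})
print(expected_quantile_columns(y_mv, [0.25, 0.75]).tolist())
# → [('a', 0.25), ('a', 0.75), ('b', 0.25), ('b', 0.75)]
```

The same helper handles all four combinations of univariate/multivariate targets and scalar/list alphas, which is what makes the extra `TEST_ALPHAS` entry exercise a genuinely new code path.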

Please let me know what you think.

@fkiraly (Collaborator, Author) commented Mar 25, 2023

yes, that's better! Much clearer, and adds a test case with multiple alphas.

I'd be happy if you branch off this PR and make the change. Or, suggest the change (using the GitHub graphical web interface) and I'll accept it.

@yarnabrina (Collaborator)

> I'd be happy if you branch off this PR and make the change. Or, suggest the change (using the GitHub graphical web interface) and I'll accept it.

Done. See #4394.

fkiraly pushed a commit that referenced this pull request Mar 26, 2023
…tors (#4391)

Fixes #4386 

Before this PR, `predict_interval` and `predict_quantiles` of `ForecastX` did not pass the `coverage` or `alpha` parameters supplied by the user to the corresponding methods of `self.forecaster_y_`. This PR fixes that bug.

##### before

```pycon
>>> pipe.predict_interval(fh=fh, X=X_test.drop(columns=columns), coverage=0.95)
          Coverage              
               0.9              
             lower         upper
1960  69583.430473  70587.653223
1961  69569.814972  70576.864647
1962  72161.834476  73168.900067
>>> pipe.predict_quantiles(fh=fh, X=X_test.drop(columns=columns), alpha=[0.25, 0.75])
         Quantiles              
              0.05          0.95
1960  69583.430473  70587.653223
1961  69569.814972  70576.864647
1962  72161.834476  73168.900067
```

##### after

```pycon
>>> pipe.predict_interval(fh=fh, X=X_test.drop(columns=columns), coverage=0.95)
          Coverage              
              0.95              
             lower         upper
1960  69487.239242  70683.844453
1961  69473.352959  70673.326660
1962  72065.370939  73265.363604
>>> pipe.predict_quantiles(fh=fh, X=X_test.drop(columns=columns), alpha=[0.25, 0.75])
         Quantiles              
              0.25          0.75
1960  69879.645730  70291.437965
1961  69866.864086  70279.815532
1962  72458.888285  72871.846258
```

Two other `ForecastX` methods also ignore passed parameters, viz. `cov` in `_predict_var` and `marginal` in `_predict_proba`. However, these parameters are not used even in `BaseForecaster`. I've made changes (similar to the two above) so that these parameters are always passed on, whether or not any estimator uses them.
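The shape of the fix can be sketched with a toy delegator; the class names below are illustrative stand-ins, not sktime's actual implementation:

```python
class RecordingForecaster:
    """Stub inner forecaster that records the coverage it receives."""

    def __init__(self):
        self.last_coverage = None

    def predict_interval(self, fh, X=None, coverage=0.9):
        self.last_coverage = coverage
        return coverage  # stand-in for a real interval DataFrame

class DelegatingForecaster:
    """Toy delegator illustrating the pass-through fix."""

    def __init__(self, forecaster):
        self.forecaster_y_ = forecaster

    def predict_interval(self, fh, X=None, coverage=0.9):
        # the bug: the delegator called the inner method WITHOUT forwarding
        # coverage, so the inner forecaster's default was always used
        # the fix: the user-supplied value is passed through explicitly
        return self.forecaster_y_.predict_interval(fh=fh, X=X, coverage=coverage)

inner = RecordingForecaster()
DelegatingForecaster(inner).predict_interval(fh=[1, 2, 3], coverage=0.95)
print(inner.last_coverage)  # → 0.95, not the 0.9 default
```

This is exactly the failure mode visible in the "before" output above, where `coverage=0.95` was requested but the 0.9 default was reported.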

Test coverage will be added in other PRs; see the discussion there:

* #4393
* #4394
@fkiraly (Collaborator, Author) commented Mar 26, 2023

Great, thanks!

I will merge in this sequence:

1. First, I'll update from `main` and ensure the tests pass here. They previously failed with `ForecastX`, so I'd expect them to pass now that the fix #4391 is merged.
2. If that is the case, we merge to `main`, then update #4394 from `main` and check whether the extended tests still run.

fkiraly added a commit that referenced this pull request Apr 1, 2023
…4393 (#4399)

This PR fixes a number of unreported probabilistic prediction bugs detected through #4393:

* `TransformedTargetForecaster` would return the columns of a prediction interval forecast in an unexpected order. This is fixed by reordering.
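The kind of reordering fix described above can be illustrated with a small pandas sketch; the column names and coverage value here are assumed for illustration, not taken from the actual fix:

```python
import pandas as pd

# interval columns that came back in an unexpected order: upper before lower
cols = pd.MultiIndex.from_tuples([("y", 0.9, "upper"), ("y", 0.9, "lower")])
pred_int = pd.DataFrame([[2.0, 1.0]], columns=cols)

# canonical order: lower before upper within each coverage level
expected = pd.MultiIndex.from_product([["y"], [0.9], ["lower", "upper"]])
pred_int = pred_int.reindex(columns=expected)

print(pred_int.columns.tolist())  # → [('y', 0.9, 'lower'), ('y', 0.9, 'upper')]
print(pred_int.iloc[0].tolist())  # → [1.0, 2.0]
```

Reindexing against the expected `MultiIndex` moves the data with the labels, so values stay attached to the correct (variable, coverage, bound) triple.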
@fkiraly fkiraly merged commit 17eaa66 into main Apr 1, 2023
21 checks passed
@fkiraly fkiraly deleted the fixed-proba-tests branch April 1, 2023 08:18