[BUG] pass user-passed parameters to ForecastX on to underlying estimators #4391
Conversation
Excellent, thanks!
This should hopefully fix it.
Let us know if you also want to try to fix the tests (or find out why this was not detected). I won't merge until you've had the chance, so you can also do it in this PR if you want.
General rationale: every bugfix should optimally be covered by a test, so we would see if we accidentally reintroduce the bug later. In this case, it's up to you if you want to add that test; if not, one of the developers will add it later. Here, "adding" probably means fixing the existing test that was supposed to check for this but didn't, see the discussion in #4386.
I've been working on the probabilistic forecasting tests as well - I think this covers it now: #4393, so it should find the bug while this is not merged.
Merged with update from main, still approve.
I found why the tests didn't capture the issue. It's because every test case uses the same fixture:

```python
ForecastX(
    forecaster_X=DirectReductionForecaster(
        X_treatment='shifted',
        estimator=LinearRegression(),
        pooling='global',
        window_length=3,
    ),
    forecaster_y=DirectTabularRegressionForecaster(
        estimator=LinearRegression(),
        window_length=3,
    ),
)
```

So, I think the tests should ensure that types of estimator do vary over different iterations, as currently all 8 cases use this same estimator.

Edit: Sorry, just saw #4393. So, I guess this is addressed there completely?
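The underlying bug is a common delegation mistake in composite estimators: the outer object accepts a user-supplied parameter but never forwards it to the wrapped estimator, which then silently falls back to its default. A minimal, sktime-free sketch of the pattern and its fix (all class and method names here are illustrative, not sktime's actual implementation):

```python
class InnerForecaster:
    """Toy wrapped forecaster; reports the coverage it was actually asked for."""

    def predict_interval(self, coverage=0.9):
        return {"coverage": coverage}


class BuggyComposite:
    """Accepts coverage but drops it when delegating (the bug)."""

    def __init__(self, inner):
        self.inner = inner

    def predict_interval(self, coverage=0.9):
        # BUG: user-passed coverage is silently ignored
        return self.inner.predict_interval()


class FixedComposite(BuggyComposite):
    """Forwards the user-supplied coverage to the wrapped forecaster (the fix)."""

    def predict_interval(self, coverage=0.9):
        return self.inner.predict_interval(coverage=coverage)


buggy = BuggyComposite(InnerForecaster())
fixed = FixedComposite(InnerForecaster())
print(buggy.predict_interval(coverage=0.5))  # {'coverage': 0.9} -- default wins, bug
print(fixed.predict_interval(coverage=0.5))  # {'coverage': 0.5} -- user value forwarded
```

This also explains why a fixture that only ever calls the methods with default parameter values can never catch the bug: defaults and dropped arguments produce identical results.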
Also, for both coverage and quantiles, tests are parameterized in `sktime/sktime/forecasting/tests/test_all_forecasters.py`, lines 406 to 415 in 8332ece, and the above code really expects a non-list.
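Extending that parameterization so list-valued `coverage` is exercised alongside scalars could look like the following sketch (the fixture values, forecaster, and checker function below are hypothetical stand-ins, not sktime's actual test code):

```python
# hypothetical contract-test fixtures: scalar and list-valued coverage
COVERAGE_FIXTURES = [0.9, 0.7, [0.7, 0.9]]  # the list case was previously missing


class DummyForecaster:
    """Toy forecaster: returns one (lower, upper) pair per requested coverage."""

    def predict_interval(self, coverage=0.9):
        # normalize to a list, since callers may pass a scalar or a list
        cov = coverage if isinstance(coverage, list) else [coverage]
        return {c: (0.0, 1.0) for c in cov}


def check_coverage_contract(forecaster):
    """Assert the forecaster honors every fixture value, scalar or list."""
    for coverage in COVERAGE_FIXTURES:
        result = forecaster.predict_interval(coverage=coverage)
        expected = coverage if isinstance(coverage, list) else [coverage]
        assert list(result) == expected


check_coverage_contract(DummyForecaster())
```

A delegation bug like the one in this PR fails such a check immediately, because the returned coverage keys no longer match the requested ones.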
Apologies for crossing over here.
I see - not sure what the reason is; the test might be left over as a refactor from the time where we did not allow multiple coverage values. Yes, we should indeed add list-like inputs. We may have to think in which sequence we merge things - you could branch off my changes in #4393.
…d `test_predict_quantiles`/`alpha` in forecaster contract tests (#4394) Updates `test_predict_interval` and `test_predict_quantiles` fixtures for `coverage` and `alpha`, adds list input case. Note: these tests fail pre-#4391, which fixes some bugs detected through the extended coverage, but started passing after #4391 was merged. Verified locally and via CI.
Reference Issues/PRs
Fixes #4386
What does this implement/fix? Explain your changes.
Before this PR, `predict_interval` or `predict_quantiles` of `ForecastX` did not pass `coverage` or `alpha` parameters passed by the user to the corresponding methods of `self.forecaster_y_`. This PR addresses this bug.

fkiraly's comment: Test coverage will be added by another PR, see discussion there: `test_predict_interval`/`coverage` and `test_predict_quantiles`/`alpha` in forecaster contract tests #4394 (FK comment end)
before
after
Does your contribution introduce a new dependency? If yes, which one?
No.
What should a reviewer concentrate their feedback on?
There are two other methods of `ForecastX` which do not use passed parameters, viz. `cov` in `_predict_var` and `marginal` in `_predict_proba`. However, these are not used even in `BaseForecaster`. I've made changes (similar to the above two) so that these parameters are always passed, whether or not they are used by any estimator.
Did you add any tests for the change?
No.
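As a side note, the "always pass the parameter through, whether or not it is currently used" change described under the reviewer-feedback question can be sketched like this (illustrative class and method names, not sktime's actual implementation):

```python
class DummyInner:
    """Toy wrapped forecaster; echoes the parameters it receives."""

    def _predict_var(self, cov=False):
        return {"cov": cov}

    def _predict_proba(self, marginal=True):
        return {"marginal": marginal}


class PassThroughComposite:
    """Always forwards cov/marginal, even if no current inner estimator uses them."""

    def __init__(self, inner):
        self.inner = inner

    def _predict_var(self, cov=False):
        # forwarding unconditionally means a future inner estimator that
        # does consume cov will receive the user's value, not a default
        return self.inner._predict_var(cov=cov)

    def _predict_proba(self, marginal=True):
        return self.inner._predict_proba(marginal=marginal)


comp = PassThroughComposite(DummyInner())
```

The design choice here is defensive: unconditional forwarding costs nothing today and prevents the same silent-default bug from reappearing if the wrapped estimators ever start using these parameters.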
Any other comments?
The setup for the code snippets shared above is as follows:
Setup Codes
PR checklist
For all contributions
For new estimators