@SmirnGregHM [BUG] ensure forecasting tuners do not vectorize over columns (variables) #5145
Conversation
I have checked, it works as expected. Are you sure it can go without the deprecation warning though?
Not entirely, that's why I would like to hear from @yarnabrina. Whether we need one hinges on how many users would think this is a bug, and how many users have actually used it as-is, accessing the `_forecasters` attribute. If there's even a small number of the latter, we ought to run this through a deprecation cycle, as it would break their code. What do you think, @SmirnGregHM?
@fkiraly it is hard for me to estimate the number of people who would use it as it is now. Can you maybe leave something like

```python
class ForecastingGridSearchCV(...):
    ...

    @property
    def _forecasters(self):
        raise AttributeError(
            "`_forecasters` property was removed and the behaviour of `ForecastingGridSearchCV`"
            " for multivariate series changed in 0.22.1. Access `best_params_`, `cv_results_`,"
            " etc. directly from `ForecastingGridSearchCV` instead."
        )
```

Would it break something else?
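For concreteness, here is a self-contained sketch of the pattern being proposed, so the failure mode is visible; the class below is an illustrative stand-in, not the actual sktime tuner:

```python
class _TunerStub:
    """Stand-in for a tuner whose `_forecasters` attribute has been removed."""

    @property
    def _forecasters(self):
        # attribute access triggers the property and fails loudly
        raise AttributeError(
            "`_forecasters` was removed; access `best_params_`, `cv_results_`,"
            " etc. directly from the tuner instead."
        )


obj = _TunerStub()
try:
    obj._forecasters
except AttributeError as err:
    print(err)  # prints the explanatory message above
```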
Hm, that sounds like a good compromise. I don't think it will break anything, since broadcasting doesn't happen after this PR.
Added the error message, slightly reworded and with more pointers. Review appreciated, @SmirnGregHM. Also added a credit to you in this PR for your contributions.
I'd prefer this to go through deprecation, possibly through a temporary

(As is probably obvious from my recent questions on Discord and GitHub, I don't use this yet, so please feel free to consider this as low to zero priority, just a random opinion.)
Does it make sense to add examples documenting how to access the best model/score/parameters for the univariate and multivariate (and ForecastX?) cases?
(In the GitHub discussion, I noticed that if the passed forecaster is ForecastX with the same model for both, the y and X forecasters get treated separately, which may not be obvious to users.)
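For context, a minimal sketch of the setup described above, using sktime's `ForecastX` compositor; the constructor argument names `forecaster_y` / `forecaster_X` are assumed here and should be checked against the `ForecastX` docstring of the installed version:

```python
from sktime.forecasting.compose import ForecastX
from sktime.forecasting.naive import NaiveForecaster

# the "same model" is passed twice, but a tuner sees two separate component
# forecasters: one predicting the exogenous data X, one predicting the target y
forecaster = ForecastX(
    forecaster_y=NaiveForecaster(strategy="mean"),
    forecaster_X=NaiveForecaster(strategy="mean"),
)
```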
@yarnabrina, point taken. FYI @SmirnGregHM I now went with a deprecation solution after all - which is close to @SmirnGregHM's original suggestion to have additional parameters. Why:
Great, @fkiraly! My only suggestion is to make the warning an explicit DeprecationWarning rather than the generic UserWarning. I have not tested the recent version yet; I will test later today. Code-wise, all seems good.
"to version 0.24.0." | ||
) |
"to version 0.24.0.",
DeprecationWarning
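For reference, the difference the suggestion makes is only the category argument of `warnings.warn`: without it, Python defaults to `UserWarning`. A minimal sketch with a placeholder message (the actual wording is in the PR diff):

```python
import warnings

# without an explicit category, warnings.warn emits a UserWarning
warnings.warn("placeholder deprecation message ... to version 0.24.0.")

# with the category passed as the second argument, the same message is
# emitted as a DeprecationWarning, which warning filters and tooling can target
warnings.warn(
    "placeholder deprecation message ... to version 0.24.0.",
    DeprecationWarning,
)
```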
thanks, changed
This PR introduces parameters to forecasting tuners such as `ForecastingGridSearchCV` to control whether parameters are tuned overall, per instance, or per variable. The current behaviour is retained as the default, which is tuning per variable and across all instances.

As this is somewhat inconsistent, this PR also starts a deprecation period, with an end state where the default is a single parameter set tuned across all instances and variables.

Fixes #5143, see there for a discussion of why this is a (logic) bug. The solution is now to move to the most intuitive setting as the default, but leave the option to tune by instance or by variable to the user, and expose this clearly in the docstrings.
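For illustration, a usage sketch of the behaviour described above; the flag name `tune_by_variable` is an assumption for illustration only, the actual parameter names are documented in the released tuner docstrings:

```python
import numpy as np
import pandas as pd

from sktime.forecasting.model_selection import (
    ForecastingGridSearchCV,
    SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster

# small multivariate series: two columns ("variables")
y = pd.DataFrame(
    np.random.default_rng(0).normal(size=(36, 2)).cumsum(axis=0),
    columns=["a", "b"],
    index=pd.period_range("2020-01", periods=36, freq="M"),
)

gscv = ForecastingGridSearchCV(
    forecaster=NaiveForecaster(),
    cv=SlidingWindowSplitter(fh=[1, 2, 3], window_length=24),
    param_grid={"strategy": ["last", "mean", "drift"]},
    tune_by_variable=False,  # assumed flag: one parameter set across all columns
)
gscv.fit(y)
print(gscv.best_params_)  # a single best parameter set for the whole frame
```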
Also extends the tests as follows:

* `best_params` is checked in all relevant tests