[ENH] pytest conditional fixtures #1839
Conversation
FYI @Lovkush-A, related to our design discussion. I think this is the "right" form of the loops.
Co-authored-by: Lovkush <lovkush@gmail.com>
@fkiraly would you be able to give a more accessible explanation of what this PR is trying to achieve, fix, or enhance?
Sure. An example, along the lines of the one in the docstring, may help.
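A minimal sketch of the conditional fixture idea, with hypothetical fixture names (the actual docstring example may differ): an unconditional generator produces estimators, and a conditional generator produces only the scenarios applicable to each estimator.

```python
# Illustrative sketch only; names and generators are hypothetical.

def generate_estimators():
    """Unconditional fixture: the estimators to test."""
    return ["forecaster_A", "classifier_B"]

def generate_scenarios(estimator):
    """Conditional fixture: scenarios depend on the estimator chosen."""
    if estimator == "forecaster_A":
        return ["fh_in_fit", "fh_in_predict"]
    return ["default"]

# nesting the generators yields only the valid fixture combinations:
combinations = [
    (est, sc)
    for est in generate_estimators()
    for sc in generate_scenarios(est)
]
# [('forecaster_A', 'fh_in_fit'), ('forecaster_A', 'fh_in_predict'),
#  ('classifier_B', 'default')]
```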
I don't really get what's happening in `get_fixtures`, or how to test it, but I trust that it's doing something useful.
It's just a wrapper around a single invocation of the conditional fixtures utility. I'll add clarification to the docstring.
Approvals from @chrisholder and @lmmentel, up to non-blocking comments, which have been addressed (I hope) - merging.
PR that uses scenarios from #1819 to refactor and simplify the tests. The refactor is carried out for `test_all_estimators` only, which signposts what the refactor of the remaining tests would look like. The high-level structure is as follows:

* tests now additionally rely on a new type of fixture, "scenarios", which encode the data passed to estimators and the methods called in sequence. Having multiple scenarios allows testing multiple sequences of data passing as a formulaic test case across estimators, e.g., "passing `fh` in `fit` and no `X`" or "passing `fh` only in `predict`, with multivariate `X` and `y`". See `_utils.testing.scenarios`. Uses #1819 (scenarios).
* not all scenarios are applicable to all estimators (e.g., classifiers need `X` and `y` to be Panel/vector; some forecasters must be given `fh` in `fit`), hence this relies on machinery to generate fixture combinations, namely estimator/scenario combinations; a sketch follows after this list. See `tests.test_all_estimators.pytest_generate_tests`. Uses #1839 (conditional fixture generation).
* in the process of slightly more stringent testing, some bugs were discovered in individual estimators; these are being fixed as far as required for the refactor to run. See #1846.
* the test suite currently runs only 1:1 remaps of the "pre-refactor" tests, while containing more scenarios. These could be run as well, but would cause many more estimators to fail, since they cover cases that weren't stringently tested before. The larger set of scenarios can easily be switched on and off by controlling how the `pre-refactor` tag of scenarios is handled.

The tests all pass, but this includes a few new escapes:

* `STLTransformer`, `STLForecaster`, and `ConditionalDeseasonalizer`, due to ongoing, unresolved discussion on #1677, #1773.
* `StackingForecaster` on `test_predict_time_index_with_X`. Funny things seem to happen if `X` is passed, of unclear cause so far.
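For illustration, a rough sketch of how estimator/scenario combinations can be wired through `pytest_generate_tests`; `ESTIMATORS` and `SCENARIOS` here are hypothetical stand-ins for the real estimator registry and scenario tags, not the actual sktime code.

```python
# conftest.py-style sketch with hypothetical registries.

ESTIMATORS = ["NaiveForecaster", "TimeSeriesForestClassifier"]

# applicable scenarios per estimator; not all scenarios apply to all estimators
SCENARIOS = {
    "NaiveForecaster": ["fh_in_fit", "fh_in_predict_only"],
    "TimeSeriesForestClassifier": ["panel_X_vector_y"],
}

def pytest_generate_tests(metafunc):
    """Parametrize tests over applicable (estimator, scenario) combinations."""
    if {"estimator", "scenario"} <= set(metafunc.fixturenames):
        combos = [
            (est, sc) for est in ESTIMATORS for sc in SCENARIOS.get(est, [])
        ]
        ids = [f"{est}-{sc}" for est, sc in combos]
        metafunc.parametrize("estimator,scenario", combos, ids=ids)
```

Any test function requesting both `estimator` and `scenario` fixtures then runs once per applicable combination, with no per-test looping.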
This PR contains a plugin for `pytest` that allows the user to easily specify conditional/nested fixtures over combinations of variables.

The utility is in `utils._testing._conditional_fixtures`, in the function `conditional_fixtures_and_names`. It takes conditional fixture generation functions and allows arbitrary nesting of conditionals (as long as they are non-circular).

As a proof of concept of how this simplifies things, I've used it to refactor `test_all_estimators`.

This will allow us to specify conditional fixtures in the future, while `pytest_generate_tests` always stays the same. It will also unlock easy inheritance of tests when wrapped into classes.

Closely related to the discussion in #1819, especially the part about the loops at the end. The conditional fixtures utility makes it easy to specify the loops, by moving modular conditional functions around.
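To make the mechanism concrete, here is a minimal, self-contained sketch of the idea (not the actual implementation in `utils._testing._conditional_fixtures`, whose signature may differ; generator names are assumptions): each generation function may depend on fixtures generated earlier in the sequence, and the resolver expands the non-circular dependencies into flat combinations.

```python
# Self-contained sketch of conditional fixture resolution; an illustration
# of the idea, not the sktime implementation.

def resolve_conditional_fixtures(fixture_sequence, generator_dict):
    """Expand generators into flat fixture combinations, in sequence order.

    Each generator receives the fixtures generated before it as keyword
    arguments, so later fixtures can be conditional on earlier ones.
    """
    combinations = [{}]
    for name in fixture_sequence:
        generator = generator_dict[name]
        combinations = [
            {**partial, name: value}
            for partial in combinations
            for value in generator(**partial)
        ]
    return combinations

# hypothetical generators: which scenarios exist depends on the estimator
generator_dict = {
    "estimator": lambda: ["forecaster", "classifier"],
    "scenario": lambda estimator: (
        ["fh_in_fit", "fh_in_predict"]
        if estimator == "forecaster"
        else ["default"]
    ),
}

combos = resolve_conditional_fixtures(["estimator", "scenario"], generator_dict)
# [{'estimator': 'forecaster', 'scenario': 'fh_in_fit'},
#  {'estimator': 'forecaster', 'scenario': 'fh_in_predict'},
#  {'estimator': 'classifier', 'scenario': 'default'}]
```

Because the expansion happens up front, the resulting flat parameter lists can be handed straight to `metafunc.parametrize`, which is why `pytest_generate_tests` never needs to change when new conditional fixtures are added.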