
[ENH] pytest conditional fixtures #1839

Merged
merged 26 commits into from Jan 31, 2022

Conversation

@fkiraly (Collaborator) commented Jan 6, 2022

This PR contains a plugin for pytest that allows the user to easily specify conditional/nested fixtures over combinations of variables.

The utility is in utils._testing._conditional_fixtures, the function conditional_fixtures_and_names. It takes conditional fixture generation functions and allows arbitrary nesting of conditionals (as long as they are non-circular).

As a proof-of-concept of how this simplifies things, I've used it to refactor test_all_estimators.

This will allow us to specify conditional fixtures in the future, and pytest_generate_tests will always be the same. This will also unlock easy inheritance of tests when wrapped into classes.

Closely related to the discussion in #1819, especially the part about the loops at the end. The conditional fixtures utility makes it easy to specify the loops by moving modular conditional functions around.
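As a sketch of the "non-circular" requirement (names and logic here are illustrative, not the sktime implementation): each generator function declares its dependencies as named arguments, so a valid evaluation order of the fixture variables exists exactly when the dependency graph is acyclic, and can be found by a topological sort.

```python
# Illustrative sketch only, not the sktime implementation.
import inspect
from graphlib import TopologicalSorter


def generate_number(test_name, **kwargs):
    """Unconditional fixture variable: depends on nothing."""
    return list(range(1, 4))


def generate_multiples(test_name, number, **kwargs):
    """Conditional fixture variable: depends on "number"."""
    return [number * i for i in range(1, number + 1)]


generator_dict = {"number": generate_number, "multiples": generate_multiples}


def resolution_order(generator_dict):
    """Order in which fixture variables can be generated, if non-circular."""
    deps = {
        var: {p for p in inspect.signature(gen).parameters if p in generator_dict}
        for var, gen in generator_dict.items()
    }
    return list(TopologicalSorter(deps).static_order())


print(resolution_order(generator_dict))  # ['number', 'multiples']
```

A circular generator_dict would make `TopologicalSorter` raise a `CycleError`, which matches the "as long as they are non-circular" restriction above.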

@fkiraly (Collaborator, Author) commented Jan 6, 2022

FYI @Lovkush-A, related to our design discussion.

I think this is the "right" form of the loops.

Co-authored-by: Lovkush <lovkush@gmail.com>
@fkiraly added the `module:tests` label (test framework functionality - only framework, excl specific tests) Jan 7, 2022
@lmmentel (Contributor) commented Jan 9, 2022

@fkiraly would you be able to give a more accessible explanation of what this PR is trying to achieve, fix, or enhance?

@fkiraly (Collaborator, Author) commented Jan 9, 2022

Sure. Here is an example from the docstring, which may help:

    Example: we want to loop over two fixture variables, "number" and "multiples"
        "number" are integers from 1 to 10,
        "multiples" are multiples of "number" up to "number"-squared
        we then write a generator_dict with two entries
        generator_dict["number"] is a function (test_name, **kwargs) -> list
            that returns [1, 2, ..., 10]
        generator_dict["multiples"] is a function (test_name, number, **kwargs) -> list
            that returns [number, 2* number, ..., number*number]

In pytest, how would you do that? You would use mark.parametrize and manually generate the combinations of fixtures.
If you have more than two fixture variables, this gets tedious. So we decompose this into conditional fixtures, and write only the logic that generates fixtures for variable i given fixtures for variables i-1, i-2, ..., 1. The rest is done by the framework (see the docstring for the syntax).
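The docstring example can be sketched in plain Python (hypothetical names, not the sktime API): with the two generator functions, the full fixture grid falls out of nested evaluation rather than a hand-written parametrize list.

```python
# Sketch of the docstring example above; names are illustrative.

def generate_number(test_name, **kwargs):
    """Fixtures for "number": the integers 1 to 10."""
    return list(range(1, 11))


def generate_multiples(test_name, number, **kwargs):
    """Fixtures for "multiples", conditional on "number":
    number, 2*number, ..., number*number."""
    return [number * i for i in range(1, number + 1)]


generator_dict = {"number": generate_number, "multiples": generate_multiples}

# Conceptually, the framework expands the conditionals into the full grid:
combinations = [
    (n, m)
    for n in generator_dict["number"]("test_example")
    for m in generator_dict["multiples"]("test_example", number=n)
]

print(len(combinations))  # 55 = 1 + 2 + ... + 10
```

With three or more conditional variables, the hand-written version of this expansion is exactly the tedious part that the utility takes over.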

chrisholder previously approved these changes Jan 19, 2022
@fkiraly fkiraly added this to Must have in Release v0.14.0 Jan 19, 2022
@fkiraly (Collaborator, Author) commented Jan 24, 2022

@fkiraly would you be able to give a more accessible explanation of what this PR is trying to achieve, fix, or enhance?

This is now in #1922, which describes the "target state"; hope that is more accessible...

lmmentel previously approved these changes Jan 30, 2022

@lmmentel (Contributor) left a comment:

I don't really get what's happening in get_fixtures or how to test it, I trust that it's doing something useful.

Review threads (resolved):
- sktime/tests/test_all_estimators.py
- sktime/tests/test_all_estimators.py (outdated)
- sktime/utils/_testing/_conditional_fixtures.py (outdated, 3 threads)
@fkiraly (Collaborator, Author) commented Jan 30, 2022

I don't really get what's happening in get_fixtures or how to test it, I trust that it's doing something useful.

It's just a wrapper around an invocation of the generator_dict values (functions that generate fixture lists and, optionally, names); it adds default names if the function does not return them.

I'll add clarification to the docstring.
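A minimal sketch of that behavior (hypothetical helper, not the actual sktime code):

```python
# Hypothetical sketch of the role of get_fixtures as described above,
# not the actual sktime implementation.

def get_fixtures_sketch(generator, test_name, **kwargs):
    """Call a generator_dict value; add default names if it returns none."""
    result = generator(test_name, **kwargs)
    # generators may return (fixtures, names) or just a fixture list
    if isinstance(result, tuple) and len(result) == 2:
        fixtures, names = result
    else:
        fixtures, names = result, None
    if names is None:
        # default: stringify the fixtures to obtain fixture names
        names = [str(f) for f in fixtures]
    return fixtures, names


print(get_fixtures_sketch(lambda test_name: [1, 2, 3], "t"))
# ([1, 2, 3], ['1', '2', '3'])
```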

@fkiraly fkiraly dismissed stale reviews from lmmentel and chrisholder via 163aded January 30, 2022 21:36
@fkiraly (Collaborator, Author) commented Jan 31, 2022

Approvals from @chrisholder and @lmmentel, up to non-blocking comments which have (I hope) been addressed - merging.

@fkiraly fkiraly merged commit 4068c1c into main Jan 31, 2022
@fkiraly fkiraly deleted the pytest-conditional-fixtures branch January 31, 2022 08:48
fkiraly added a commit that referenced this pull request Jan 31, 2022
PR that uses scenarios from #1819 for refactoring and simplifying the tests.
Refactoring is carried out for `test_all_estimators` only, which signposts what the refactor for the remaining tests would look like.

The high-level structure is as follows:

* tests now additionally rely on a new type of fixture, "scenarios", which encode data that is passed to estimators and methods that are sequentially called. Having multiple scenarios allows testing multiple sequences of data being passed as a formulaic test case across estimators, e.g., "passing `fh` in `fit` and no `X`" or "passing `fh` only in `predict`, with `X` and `y` that are multivariate". See `_utils.testing.scenarios`. Uses #1819 (scenarios).
* not all scenarios are applicable to all estimators (e.g., classifiers need `X` and `y` to be Panel/vector; some forecasters must be given `fh` in `fit`), hence this relies on some machinery to generate fixture combinations, namely estimator/scenario combinations. See `tests.test_all_estimators.pytest_generate_tests`. Uses #1839 (conditional fixture generation).
* in the process of slightly more stringent testing, some bugs were discovered in individual estimators, which are being fixed as far as required for the refactor to run. See #1846.
* the test suite currently runs only 1:1 remaps of "pre-refactor" tests, while containing more scenarios. These could be run as well, but would cause many more estimators to fail, since they cover scenarios that weren't stringently tested before. The larger set of scenarios can easily be switched on and off by controlling how the `pre-refactor` tag of scenarios is handled.

The tests all pass, but this includes a few new escapes:

* `STLTransformer`, `STLforecaster`, and `ConditionalDeseasonalizer`, due to ongoing, unresolved discussion on #1677, #1773
* `StackingForecaster` on `test_predict_time_index_with_X`. Funny things seem to happen if `X` is passed, of unclear cause so far.
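The scenario idea described in the commit message can be sketched roughly as follows (class and attribute names here are assumptions for illustration, not the sktime API):

```python
# Rough sketch of a "scenario": a bundle of call arguments plus a method
# sequence to run against an estimator. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Scenario:
    args: dict             # method name -> kwargs for that call
    method_sequence: list  # methods called in order on the estimator
    tags: dict = field(default_factory=dict)  # e.g. {"pre-refactor": True}

    def run(self, estimator):
        """Call the methods in sequence, returning each call's result."""
        return [
            getattr(estimator, m)(**self.args.get(m, {}))
            for m in self.method_sequence
        ]


class DummyForecaster:
    """Toy estimator: remembers fh from fit, echoes it from predict."""

    def fit(self, y=None, fh=None):
        self._fh = fh
        return self

    def predict(self, fh=None):
        return fh if fh is not None else self._fh


# One formulaic test case: pass fh in fit, nothing in predict.
scenario = Scenario(
    args={"fit": {"y": [1, 2, 3], "fh": [1]}},
    method_sequence=["fit", "predict"],
    tags={"pre-refactor": True},
)
print(scenario.run(DummyForecaster())[-1])  # [1]
```

Under this framing, "passing `fh` only in `predict`" is simply another `Scenario` instance with different `args`, run against the same estimators.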
@fkiraly fkiraly moved this from Must have to Done in Release v0.14.0 Jan 31, 2022