Refactor tests to avoid using importorskip #3126
Conversation
Codecov Report
@@ Coverage Diff @@
## main #3126 +/- ##
=======================================
+ Coverage 99.8% 99.8% +0.1%
=======================================
Files 315 315
Lines 30684 30711 +27
=======================================
+ Hits 30593 30620 +27
Misses 91 91
Continue to review full report at Codecov.
@@ -0,0 +1,8 @@
# if there are more than 0 lines with importorskip, go into the if branch
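The new eight-line lint script itself isn't shown in this hunk, only the comment above. As a hedged sketch of the check it describes (fail when any test file still mentions importorskip), an equivalent written in Python might look like this; the evalml/tests path and the output format are assumptions, not the PR's actual script:

import pathlib
import sys

# collect every line in the test suite that still mentions importorskip
matches = [
    f"{path}:{lineno}: {line.strip()}"
    for path in pathlib.Path("evalml/tests").rglob("*.py")
    for lineno, line in enumerate(path.read_text().splitlines(), start=1)
    if "importorskip" in line
]

# if there are more than 0 lines with importorskip, go into the if branch
if matches:
    print("\n".join(matches))
    sys.exit(1)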
Really cool idea, thank you for adding this as well as fixing all of the current references to importorskip! 😁
Really awesome work @freddyaboulton! LGTM, I just had some nit-picky comments about:
- The message for our noncore_dependency marker
- Having to import the module in each test that wants to use it. I wonder if there's a way to create a fixture for the file to get around that, but I'm not sure?
Thanks for doing this and adding a lint script to prevent us from falling back to this pattern! 🙏
evalml/tests/conftest.py
@@ -48,6 +48,7 @@ def pytest_configure(config):
         "markers",
         "skip_offline: mark test to be skipped if offline (https://api.featurelabs.com cannot be reached)",
     )
+    config.addinivalue_line("markers", "noncore_dependency: mark test as slow to run")
Nice! Is the reason we mark these tests that they're slow to run? Do we want to mention instead that they contain dependencies that are not part of the core requirements?
This is what happens when you copy-paste from the pytest docs 😂 Will update the message.
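For illustration, the updated registration could look something like the lines below; this is a sketch of the message change being discussed, not the exact wording that landed:

config.addinivalue_line(
    "markers",
    "noncore_dependency: mark test as depending on packages outside the core requirements",
)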
@@ -272,7 +278,7 @@ def test_smotenc_categorical_features(X_y_binary):
     X, y = X_y_binary
     X_ww = infer_feature_types(X, feature_types={0: "Categorical", 1: "Categorical"})
     snc = Oversampler()
-    X_out, y_out = snc.fit_transform(X_ww, y)
+    _ = snc.fit_transform(X_ww, y)
     assert snc.categorical_features == [0, 1]
It's interesting that we don't test anything with the outputs 😅
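For what it's worth, a follow-up assertion on the outputs could look roughly like the lines below. That fit_transform returns an (X, y) pair and that oversampling never drops rows are assumptions for this sketch, not something the diff verifies:

X_out, y_out = snc.fit_transform(X_ww, y)  # assumed to return an (X, y) pair
assert len(X_out) == len(y_out)
assert len(X_out) >= len(X_ww)  # oversampling should not shrink the data (assumption)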
@@ -46,6 +43,8 @@ def test_sampler_selection(
     categorical_columns,
     mock_imbalanced_data_X_y,
 ):
+    from imblearn import over_sampling as im
Does this mean that if we want to use a non-core dependency in our tests, we now have to import it in every test that we want to use it in? 😮
I wonder if this will cause the tests to run longer, since we have to import the same modules over and over. Probably not a noticeable difference given our tests, but it makes me wonder if there's a way to create a fixture or something, the way we do for our estimators, and then call that instead of importing: catboost = import_or_raise("catboost", error_msg=cb_error_msg)
Thanks for the suggestion @angela97lin! At first I thought creating the fixtures happened before the test was skipped, but it turns out it's the opposite, which I guess makes sense. I'm creating fixtures for the commonly imported non-core dependencies in our tests. I don't think it'll run any faster (test_graphs has multiple importorskip("plotly.graph_objects") calls), but I think this will be great for reducing code duplication.
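A minimal sketch of that fixture approach, assuming a conftest.py fixture that performs the import and a test tagged with the new marker; the fixture name and the test body are hypothetical, not the PR's exact code:

import pytest

@pytest.fixture
def im():
    # the import runs once per test that requests this fixture; the test is
    # tagged noncore_dependency instead of calling importorskip
    from imblearn import over_sampling as im
    return im

@pytest.mark.noncore_dependency
def test_sampler_selection(im):
    # hypothetical usage: the fixture hands the imported module to the test
    assert hasattr(im, "SMOTE")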
Force-pushed from b9e6a68 to 9fd8f5b
Pull Request Description
Fixes #2922
After creating the pull request: in order to pass the release_notes_updated check you will need to update the "Future Release" section of docs/source/release_notes.rst to include this pull request by adding :pr:`123`.