Refactor tests to avoid using importorskip #3126

Merged
freddyaboulton merged 4 commits into main from 2922-refactor-to-remove-import-or-skip on Dec 7, 2021

Conversation

freddyaboulton
Contributor

Pull Request Description

Fixes #2922


After creating the pull request: in order to pass the release_notes_updated check you will need to update the "Future Release" section of docs/source/release_notes.rst to include this pull request by adding :pr:123.

@freddyaboulton freddyaboulton self-assigned this Dec 6, 2021
@codecov

codecov bot commented Dec 6, 2021

Codecov Report

Merging #3126 (9fd8f5b) into main (30b2a6a) will increase coverage by 0.1%.
The diff coverage is 100.0%.

Impacted file tree graph

@@           Coverage Diff           @@
##            main   #3126     +/-   ##
=======================================
+ Coverage   99.8%   99.8%   +0.1%     
=======================================
  Files        315     315             
  Lines      30684   30711     +27     
=======================================
+ Hits       30593   30620     +27     
  Misses        91      91             
Impacted Files Coverage Δ
.../automl_tests/test_automl_search_classification.py 100.0% <100.0%> (ø)
...ests/automl_tests/test_automl_search_regression.py 100.0% <100.0%> (ø)
evalml/tests/automl_tests/test_automl_utils.py 100.0% <100.0%> (ø)
...lml/tests/automl_tests/test_iterative_algorithm.py 100.0% <100.0%> (ø)
...l/tests/automl_tests/test_pipeline_search_plots.py 100.0% <100.0%> (ø)
...alml/tests/component_tests/test_arima_regressor.py 100.0% <100.0%> (ø)
.../tests/component_tests/test_catboost_classifier.py 100.0% <100.0%> (ø)
...l/tests/component_tests/test_catboost_regressor.py 100.0% <100.0%> (ø)
...alml/tests/component_tests/test_lgbm_classifier.py 100.0% <100.0%> (ø)
...valml/tests/component_tests/test_lgbm_regressor.py 100.0% <100.0%> (ø)
... and 21 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 30b2a6a...9fd8f5b.

@@ -0,0 +1,8 @@
# if there are more than 0 lines with importorskip, go into the if branch
freddyaboulton (Contributor, Author)

Will prevent us from using importorskip in the future:

[screenshot of the new lint check output]
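As a rough illustration of the idea (the actual eight-line script added by this PR isn't reproduced here, and the file name and search path below are assumptions), the check boils down to something like:

# check_importorskip.py (hypothetical name and path) -- fail if any test file uses importorskip.
import pathlib
import sys

offending = [
    f"{path}:{number}: {line.strip()}"
    for path in pathlib.Path("evalml/tests").rglob("*.py")
    for number, line in enumerate(path.read_text().splitlines(), start=1)
    if "importorskip" in line
]

# If there are more than 0 lines with importorskip, go into the if branch and fail the check.
if len(offending) > 0:
    print("importorskip is not allowed in tests; use the noncore_dependency marker instead:")
    print("\n".join(offending))
    sys.exit(1)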

Contributor

Really cool idea, thank you for adding this as well as fixing all of the current references to importorskip! 😁

@angela97lin angela97lin (Contributor) left a comment

Really awesome work @freddyaboulton! LGTM, I just had some nit-picky comments about:

  1. The message for our noncore_dependency marker
  2. Having to import the module in each test where we want to use it. I wonder if there's a way to create a fixture for the file to get around that, but I'm not sure?

Thanks for doing this and adding a lint script to prevent us from falling back to this pattern! 🙏

@@ -48,6 +48,7 @@ def pytest_configure(config):
"markers",
"skip_offline: mark test to be skipped if offline (https://api.featurelabs.com cannot be reached)",
)
config.addinivalue_line("markers", "noncore_dependency: mark test as slow to run")
Contributor

Nice! Is the reason we mark these tests that they're slow to run? Should the message instead say that they use dependencies that are not part of the core requirements?

freddyaboulton (Contributor, Author)

This is what happens when you copy-paste from the pytest docs 😂 Will update the message.
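For reference, a sketch of what the reworded registration could look like (the final wording merged in the PR may differ):

# Sketch of the reworded marker registration inside conftest's pytest_configure hook.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "noncore_dependency: mark test as requiring a dependency outside the core requirements",
    )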


@@ -272,7 +278,7 @@ def test_smotenc_categorical_features(X_y_binary):
X, y = X_y_binary
X_ww = infer_feature_types(X, feature_types={0: "Categorical", 1: "Categorical"})
snc = Oversampler()
X_out, y_out = snc.fit_transform(X_ww, y)
_ = snc.fit_transform(X_ww, y)
assert snc.categorical_features == [0, 1]
Contributor

It's interesting that we don't test anything with the outputs 😅

@@ -46,6 +43,8 @@ def test_sampler_selection(
categorical_columns,
mock_imbalanced_data_X_y,
):
from imblearn import over_sampling as im
Contributor

Does this mean that if we want to use a non-core dependency in our tests, we now have to import it in every test that we want to use it in? 😮

I wonder if this will make the tests run longer, since we have to import the same modules over and over. It's probably not a noticeable difference given our tests, but it makes me wonder if there's a way to create a fixture or something, the way we do for our estimators, and then call that instead of importing, e.g. catboost = import_or_raise("catboost", error_msg=cb_error_msg).

freddyaboulton (Contributor, Author)

Thanks for the suggestion @angela97lin! At first I thought the fixtures were created before the test was skipped, but it turns out it's the opposite, which I guess makes sense. I'm creating fixtures for the commonly imported non-core dependencies in our tests. I don't think it will run any faster (test_graphs has multiple importorskip("plotly.graph_objects") calls), but I think this will be great for reducing code duplication.
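A rough sketch of the fixture approach described here, assuming one conftest.py fixture per commonly used non-core dependency; the fixture names are illustrative, not necessarily what the PR ends up adding:

# conftest.py -- hypothetical fixtures for commonly imported non-core test dependencies.
import pytest


@pytest.fixture
def im():
    """Provide imblearn's over_sampling module to tests that request it."""
    from imblearn import over_sampling

    return over_sampling


@pytest.fixture
def go():
    """Provide plotly.graph_objects to graph tests."""
    import plotly.graph_objects as graph_objects

    return graph_objects

A test could then be marked with @pytest.mark.noncore_dependency and accept im (or go) as a parameter instead of calling pytest.importorskip, so a missing dependency raises an ImportError rather than silently skipping the test.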

@freddyaboulton freddyaboulton changed the title Refactor test files to avoid using importorskip Refactor tests to avoid using importorskip Dec 7, 2021
@freddyaboulton freddyaboulton force-pushed the 2922-refactor-to-remove-import-or-skip branch from b9e6a68 to 9fd8f5b on December 7, 2021 16:03
@freddyaboulton freddyaboulton merged commit 4bfe5f4 into main Dec 7, 2021
@freddyaboulton freddyaboulton deleted the 2922-refactor-to-remove-import-or-skip branch December 7, 2021 17:26
@chukarsten chukarsten mentioned this pull request Dec 9, 2021
Successfully merging this pull request may close these issues.

Refactor tests so that they fail when a dependency that should be installed is not installed