
Move predict_proba and predict tests regarding string / categorical targets to test_pipelines.py #972

Merged

merged 9 commits into main from 951_move_tests on Jul 27, 2020

Conversation

@angela97lin (Contributor) commented Jul 23, 2020

Updates the tests added in #951 by moving the predict_proba and predict tests for string / categorical targets to test_pipelines.py.

Also adds some new pytest fixtures for creating a pipeline for every available estimator in evalml. I wanted to use these to ensure that no particular estimator has unusual output that would break the code, and I think they will be useful in scenarios where we want to do more comprehensive testing with pipelines.
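The general idea behind such fixtures can be sketched as follows. This is a minimal, hypothetical illustration, not the actual evalml API: `EstimatorA`, `EstimatorB`, and `make_pipeline_classes` are illustrative stand-ins for evalml's estimators and the fixture helper added in conftest.py.

```python
# Hypothetical sketch: dynamically build one minimal pipeline class per
# available estimator, so tests can iterate over every estimator without
# hand-writing a pipeline for each. Stand-in names, not the evalml API.

class EstimatorA:
    """Stand-in for an evalml estimator."""

class EstimatorB:
    """Stand-in for another evalml estimator."""

def make_pipeline_classes(estimators):
    """Create a minimal pipeline class for each estimator via type()."""
    pipeline_classes = []
    for estimator in estimators:
        cls = type(
            f"{estimator.__name__}Pipeline",   # e.g. "EstimatorAPipeline"
            (object,),
            {"component_graph": [estimator]},  # single-estimator pipeline
        )
        pipeline_classes.append(cls)
    return pipeline_classes

pipeline_classes = make_pipeline_classes([EstimatorA, EstimatorB])
```

In a conftest.py, a helper like this would typically be wrapped in `@pytest.fixture` so each test receives the full list of generated pipeline classes and can instantiate and run predict / predict_proba on every one.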

@angela97lin angela97lin added this to the July 2020 milestone Jul 23, 2020
@angela97lin angela97lin self-assigned this Jul 23, 2020
codecov bot commented Jul 23, 2020

Codecov Report

Merging #972 into main will increase coverage by 0.20%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##             main     #972      +/-   ##
==========================================
+ Coverage   99.67%   99.87%   +0.20%     
==========================================
  Files         174      174              
  Lines        8990     9043      +53     
==========================================
+ Hits         8961     9032      +71     
+ Misses         29       11      -18     
| Impacted Files | Coverage Δ |
|---|---|
| evalml/tests/automl_tests/test_automl.py | 100.00% <ø> (ø) |
| evalml/tests/conftest.py | 100.00% <100.00%> (ø) |
| evalml/tests/pipeline_tests/test_pipelines.py | 100.00% <100.00%> (+0.86%) ⬆️ |
| evalml/automl/automl_search.py | 99.55% <0.00%> (+0.44%) ⬆️ |
| .../automl_tests/test_automl_search_classification.py | 100.00% <0.00%> (+0.45%) ⬆️ |
| evalml/tests/component_tests/test_components.py | 100.00% <0.00%> (+0.61%) ⬆️ |
| ...ests/automl_tests/test_automl_search_regression.py | 100.00% <0.00%> (+1.06%) ⬆️ |
| evalml/utils/gen_utils.py | 100.00% <0.00%> (+2.53%) ⬆️ |
| evalml/tests/component_tests/test_utils.py | 100.00% <0.00%> (+3.57%) ⬆️ |

... and 1 more


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update f172ff5...3f99d73.

@angela97lin angela97lin marked this pull request as ready for review Jul 24, 2020
@freddyaboulton (Contributor) left a comment

@angela97lin This looks good to me! Just had a couple of non-blocking questions.

Resolved review threads:
- evalml/tests/pipeline_tests/test_pipelines.py
- evalml/tests/conftest.py (two threads)
@dsherry (Collaborator) left a comment

@angela97lin nice! I ran this locally and it took 23 seconds total. Not bad given that we're trying each combination of estimator and target type for both binary and multiclass! Sounds good to me 😁

@angela97lin angela97lin merged commit 967073c into main Jul 27, 2020
2 checks passed
@angela97lin angela97lin mentioned this pull request Jul 31, 2020
@angela97lin angela97lin deleted the 951_move_tests branch Sep 24, 2020

3 participants