Move predict_proba and predict tests regarding string / categorical targets to test_pipelines.py #972
Conversation
Codecov Report

@@            Coverage Diff             @@
##             main     #972      +/-   ##
==========================================
+ Coverage   99.67%   99.87%    +0.20%
==========================================
  Files         174      174
  Lines        8990     9043       +53
==========================================
+ Hits         8961     9032       +71
+ Misses         29       11       -18

Continue to review full report at Codecov.
@angela97lin This looks good to me! Just had a couple of non-blocking questions.
@angela97lin nice! I ran this locally and it took 23 seconds total. Not bad given that we're trying each combo of estimator and target type for both binary and multiclass! Sounds good to me 😁
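For reference, stacking pytest.mark.parametrize decorators is one way to generate that combinatorial grid. This is a hedged sketch: the estimator names and the way targets are built are illustrative assumptions, not the PR's exact code (the PR itself enumerates evalml's estimators through fixtures).

```python
import numpy as np
import pandas as pd
import pytest

# Illustrative stand-ins; the PR enumerates evalml's actual estimators via fixtures.
ESTIMATOR_NAMES = ["Random Forest Classifier", "Logistic Regression Classifier"]


@pytest.mark.parametrize("estimator_name", ESTIMATOR_NAMES)
@pytest.mark.parametrize("problem_type", ["binary", "multiclass"])
@pytest.mark.parametrize("target_type", ["string", "categorical"])
def test_targets_roundtrip(estimator_name, problem_type, target_type):
    # Build a small dataset whose labels are strings, optionally as a
    # pandas categorical dtype.
    n_classes = 2 if problem_type == "binary" else 3
    labels = [f"class_{i}" for i in range(n_classes)]
    X = pd.DataFrame(np.random.rand(30, 4))
    y = pd.Series([labels[i % n_classes] for i in range(30)])
    if target_type == "categorical":
        y = y.astype("category")
    # A pipeline built around `estimator_name` would be fit here and its
    # predict() / predict_proba() output checked against `labels`; see the
    # fixture sketch after the PR description below.
    assert y.nunique() == n_classes
```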
Updates tests added in #951 by moving predict_proba and predict tests regarding string / categorical targets to test_pipelines.py. Also adds some new pytest fixtures that should be useful for creating pipelines for every available estimator in evalml. I wanted to use this to ensure that no particular estimator had some weird output that would break the code, and I think it would be useful for scenarios where we want to do more comprehensive testing with pipelines.
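A minimal sketch of what such a fixture might look like, assuming evalml's convention at the time of defining pipelines by subclassing a pipeline base class with a `component_graph`. The fixture name `make_one_estimator_pipeline`, the import paths, and the constructor signature are assumptions for illustration, not necessarily the fixtures this PR adds.

```python
import numpy as np
import pandas as pd
import pytest


@pytest.fixture
def make_one_estimator_pipeline():
    """Factory that wraps a single estimator class in a one-component pipeline."""
    # Import path assumed from evalml's public API around the time of this PR.
    from evalml.pipelines import BinaryClassificationPipeline

    def _factory(estimator_class, parameters=None):
        class OneEstimatorPipeline(BinaryClassificationPipeline):
            component_graph = [estimator_class]

        return OneEstimatorPipeline(parameters=parameters or {})

    return _factory


def test_string_targets_roundtrip(make_one_estimator_pipeline):
    # Assumed estimator import; any classifier exposing fit / predict /
    # predict_proba would work here.
    from evalml.pipelines.components import RandomForestClassifier

    X = pd.DataFrame(np.random.rand(20, 3))
    y = pd.Series(["yes", "no"] * 10)

    pipeline = make_one_estimator_pipeline(RandomForestClassifier)
    pipeline.fit(X, y)

    # predict() should return the original string labels, not encoded integers.
    assert set(pipeline.predict(X)).issubset({"yes", "no"})
    # predict_proba() should yield one probability column per class.
    assert pipeline.predict_proba(X).shape == (20, 2)
```

Factoring the pipeline construction into a fixture keeps each test focused on the behavior under test (label round-tripping) while making it cheap to repeat the same check across every available estimator.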