Update CatBoost estimators to return self in fit rather than underlying model for consistency #1701
Conversation
Codecov Report

@@            Coverage Diff            @@
##              main    #1701    +/-   ##
=========================================
+ Coverage    100.0%   100.0%    +0.1%
=========================================
  Files          242      242
  Lines        19055    19072      +17
=========================================
+ Hits         19047    19064      +17
  Misses           8        8

Continue to review the full report at Codecov.
This looks good. I was trying to see if we had any tests that lock down what an estimator child class returns from its methods, and couldn't find any right off the bat. Shoot me a message on Slack if that exists. I think it's important that we have some tests for these functions, like feature_importance(), which I think needs to return a numpy array, but we don't seem to check or test for that anywhere.
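For reference, a test along those lines might look like the sketch below. This is not an existing EvalML test: the tiny dataset, the n_estimators value, the import path for CatBoostClassifier, and the assumption that feature_importance is exposed as a property are all illustrative and may need adjusting to the actual codebase.

```python
import numpy as np
import pandas as pd
import pytest

# Assumed import path for EvalML's CatBoost wrapper component.
from evalml.pipelines.components import CatBoostClassifier


@pytest.fixture
def binary_data():
    # Tiny illustrative dataset; real tests would use the shared fixtures.
    X = pd.DataFrame({"a": [1, 2, 3, 4], "b": [0.5, 1.5, 2.5, 3.5]})
    y = pd.Series([0, 1, 0, 1])
    return X, y


def test_fit_returns_self(binary_data):
    X, y = binary_data
    clf = CatBoostClassifier(n_estimators=10)
    # fit should hand back the EvalML component, not the underlying CatBoost model.
    assert clf.fit(X, y) is clf


def test_feature_importance_returns_numpy_array(binary_data):
    X, y = binary_data
    clf = CatBoostClassifier(n_estimators=10).fit(X, y)
    # Assumes feature_importance is a property; adjust if it is a method in your version.
    assert isinstance(clf.feature_importance, np.ndarray)
```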
Looks good!
LGTM! Nice catch!
@chukarsten I think you're right that our other tests don't explicitly check, for example, the return type of fit.
Updates the CatBoost estimators to return self from fit rather than the underlying model, for consistency: our other estimators return self, so CatBoost should too.
This fell out of work I've been doing with components and pipelines returning Woodwork, and general cleanup of EvalML 😛
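Roughly, the change boils down to the pattern in the sketch below: a wrapper component whose fit returns the component itself instead of the fitted catboost object. This is a simplified, standalone illustration, not the actual EvalML implementation (the real Estimator base class handles parameters, Woodwork conversion, and categorical features); the name _component_obj just mirrors EvalML's convention.

```python
import pandas as pd
from catboost import CatBoostClassifier as CatBoostClassifierImpl


class CatBoostClassifier:
    """Simplified sketch of an EvalML-style wrapper around catboost."""

    def __init__(self, n_estimators=10, random_seed=0):
        # The underlying catboost model is stored on the component.
        self._component_obj = CatBoostClassifierImpl(
            n_estimators=n_estimators,
            random_seed=random_seed,
            allow_writing_files=False,
        )

    def fit(self, X, y=None):
        X = pd.DataFrame(X)
        y = pd.Series(y)
        self._component_obj.fit(X, y, silent=True)
        # Return the wrapper component rather than the fitted catboost model,
        # matching the behavior of the other EvalML estimators.
        return self

    def predict(self, X):
        return self._component_obj.predict(pd.DataFrame(X))
```

With the change, CatBoostClassifier().fit(X, y) hands back the component, so chained calls like .fit(X, y).predict(X) and downstream pipeline code see the same object type as with every other estimator.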