
Objectives API: Create new binary / multiclass pipeline classes and remove objectives from pipeline classes #405

Merged
merged 34 commits into from Mar 2, 2020

Conversation


@angela97lin angela97lin commented Feb 20, 2020

As part of #346, we need to split our pipelines into binary and multiclass pipelines.

Closes #404, #348

(Note: renamed branch to include issue number)
(Closes #348 by making predict_proba return the full output regardless of binary and multiclass and handling the dimensional logic in score instead.)

@angela97lin angela97lin changed the title As part of #346, we need to split our pipelines into binary and multiclass pipelines. [WIP] Creating new binary / multiclass pipeline classes Feb 20, 2020

codecov bot commented Feb 20, 2020

Codecov Report

No coverage uploaded for pull request base (improved_objectives@db948a1). Click here to learn what that means.
The diff coverage is 100%.

@@                  Coverage Diff                   @@
##             improved_objectives     #405   +/-   ##
======================================================
  Coverage                       ?   97.37%           
======================================================
  Files                          ?      110           
  Lines                          ?     3317           
  Branches                       ?        0           
======================================================
  Hits                           ?     3230           
  Misses                         ?       87           
  Partials                       ?        0
Impacted Files Coverage Δ
...components/transformers/imputers/simple_imputer.py 100% <ø> (ø)
...s/components/estimators/regressors/rf_regressor.py 100% <ø> (ø)
...mponents/estimators/regressors/linear_regressor.py 100% <ø> (ø)
evalml/pipelines/__init__.py 100% <ø> (ø)
...onents/estimators/regressors/catboost_regressor.py 100% <ø> (ø)
...ents/estimators/classifiers/catboost_classifier.py 100% <ø> (ø)
...eature_selection/rf_classifier_feature_selector.py 100% <ø> (ø)
...components/estimators/classifiers/rf_classifier.py 100% <ø> (ø)
...components/transformers/scalers/standard_scaler.py 100% <ø> (ø)
...nents/estimators/classifiers/xgboost_classifier.py 100% <ø> (ø)
... and 40 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update db948a1...e55a2c6. Read the comment docs.

@angela97lin angela97lin changed the title [WIP] Creating new binary / multiclass pipeline classes [WIP] Creating new binary / multiclass pipeline classes and remove objectives from pipeline classes Feb 21, 2020
@angela97lin angela97lin changed the title [WIP] Creating new binary / multiclass pipeline classes and remove objectives from pipeline classes Create new binary / multiclass pipeline classes and remove objectives from pipeline classes Feb 21, 2020
Review thread: evalml/pipelines/pipeline_base.py (outdated, resolved)
Review thread: requirements.txt (outdated, resolved)

codecov bot commented Feb 25, 2020

Codecov Report

Merging #405 into improved_objectives will increase coverage by 0.11%.
The diff coverage is 99.23%.

@@                   Coverage Diff                   @@
##           improved_objectives     #405      +/-   ##
=======================================================
+ Coverage                97.42%   97.53%   +0.11%     
=======================================================
  Files                      111      113       +2     
  Lines                     3376     3448      +72     
=======================================================
+ Hits                      3289     3363      +74     
+ Misses                      87       85       -2
Impacted Files Coverage Δ
...s/components/estimators/regressors/rf_regressor.py 100% <ø> (ø) ⬆️
...eature_selection/rf_classifier_feature_selector.py 100% <ø> (ø) ⬆️
...components/estimators/classifiers/rf_classifier.py 100% <ø> (ø) ⬆️
...components/transformers/scalers/standard_scaler.py 100% <ø> (ø) ⬆️
evalml/automl/auto_classification_search.py 100% <ø> (ø) ⬆️
...onents/estimators/regressors/catboost_regressor.py 100% <ø> (ø) ⬆️
...nents/estimators/classifiers/xgboost_classifier.py 100% <ø> (ø) ⬆️
evalml/automl/auto_regression_search.py 100% <ø> (ø) ⬆️
...components/transformers/encoders/onehot_encoder.py 100% <ø> (ø) ⬆️
...ml/pipelines/classification/catboost_multiclass.py 100% <ø> (ø) ⬆️
... and 29 more

Powered by Codecov. Last update de414a6...ead5225. Read the comment docs.


angela97lin commented Feb 25, 2020

@dsherry RE: suggestions for tests:

  • Try to fit and predict with a binary classification pipeline, but don't pass an objective and expect it to error out --> to discuss: is this what we want it to do? It should still be valid to just fit and predict using the estimator?
  • Call fit/predict with regression or multiclass without providing objective arg, expect it to work --> done :)
  • Call fit/predict with an invalid objective arg --> not really sure what you were thinking about the expected error for this? currently, get_objectives would just error out
  • Pass empty objective list to scores --> added a test (test_score_with_empty_list_of_objectives in test_pipelines.py), but this currently just tests that if score gets an empty list, it raises an IndexError. Were you thinking about raising our own error instead?
  • Pass invalid objective names to scores --> same as above, get_objectives would just error
  • We need coverage somewhere that pipelines subclassing each of the three new pipeline base classes work, i.e. we can call fit, predict, predict_proba, score, feature_importance, describe, etc. --> Not sure if this is what you mean, but all of the pipelines we currently have in place subclass and call these methods :o Or do you mean we should create our own custom pipeline class that subclasses each of these three classes and check that all the methods work?
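To make the empty-objectives case concrete, here is a minimal sketch of the kind of check being discussed. `MockPipeline` and its `score()` are illustrative stand-ins, not evalml's real classes; the real test lives in test_pipelines.py.

```python
# Hypothetical stand-in for a pipeline whose score() receives an
# empty objectives list; mirrors the IndexError behavior described above.
class MockPipeline:
    def score(self, X, y, objectives):
        if not objectives:
            raise IndexError("objectives list is empty")
        return {obj: 1.0 for obj in objectives}

pipeline = MockPipeline()
try:
    pipeline.score([[1]], [0], objectives=[])
    raised = False
except IndexError:
    raised = True
# raised is True: an empty list currently surfaces as an IndexError,
# which is the behavior the discussion questions.
```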


dsherry commented Feb 27, 2020

@angela97lin thanks for response. Should I re-review?

  • Try to fit and predict with a binary classification pipeline, but don't pass an objective and expect it to error out --> to discuss: is this what we want it to do? It should still be valid to just fit and predict using the estimator?

So for binary classification, we need to pick a classification threshold during fit (BinaryClassificationObjective.optimize_threshold), and then we need a decision function to use the threshold with during predict (BinaryClassificationObjective.decision_function). And both of those things come from the objectives, right? We could A) require an objective as input there or B) use classification accuracy by default when someone didn't pass in an objective. Other options? I like A more because the behavior is more transparent.
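A rough sketch of option A, where the objective supplies both the threshold search and the decision function. The method names mirror `BinaryClassificationObjective.optimize_threshold` and `.decision_function` from the discussion, but this class is a simplified stand-in, not evalml's implementation:

```python
# Simplified objective: fit() would call optimize_threshold() to pick a
# classification threshold, and predict() would apply it via
# decision_function(). Brute-force search is illustrative only.
class AccuracyObjective:
    def optimize_threshold(self, y_proba, y_true):
        # Scan candidate thresholds, keep the one with the best accuracy.
        best_t, best_score = 0.5, -1.0
        for i in range(101):
            t = i / 100
            score = sum((p > t) == bool(y)
                        for p, y in zip(y_proba, y_true)) / len(y_true)
            if score > best_score:
                best_t, best_score = t, score
        return best_t

    def decision_function(self, y_proba, threshold):
        return [int(p > threshold) for p in y_proba]

y_proba = [0.1, 0.4, 0.6, 0.9]
y_true = [0, 0, 1, 1]
obj = AccuracyObjective()
threshold = obj.optimize_threshold(y_proba, y_true)  # chosen during fit
preds = obj.decision_function(y_proba, threshold)    # applied during predict
```

Under option A this objective would be a required argument; under option B it would simply be the default when none is passed.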

  • Call fit/predict with an invalid objective arg --> not really sure what you were thinking about the expected error for this? currently, get_objectives would just error out

Yep, I just mean expecting that error.

  • Pass empty objective list to scores --> added a test (test_score_with_empty_list_of_objectives in test_pipelines.py), but this currently just tests that if score gets an empty list, it raises an IndexError. Were you thinking about raising our own error instead?

What do you think? Would it be more clear to throw our own error? If so I'm a fan

  • We need coverage somewhere that pipelines subclassing each of the three new pipeline base classes work, i.e. we can call fit, predict, predict_proba, score, feature_importance, describe, etc. --> Not sure if this is what you mean, but all of the pipelines we currently have in place subclass and call these methods :o Or do you mean we should create our own custom pipeline class that subclasses each of these three classes and check that all the methods work?

I guess I meant providing test coverage for the former. Like, we should have a test for each of those methods for BinaryClassificationPipeline, and for the other kinds

@angela97lin angela97lin changed the title Create new binary / multiclass pipeline classes and remove objectives from pipeline classes Objectives API: Create new binary / multiclass pipeline classes and remove objectives from pipeline classes Feb 28, 2020

angela97lin commented Feb 28, 2020

So for binary classification, we need to pick a classification threshold during fit (BinaryClassificationObjective.optimize_threshold), and then we need a decision function to use the threshold with during predict (BinaryClassificationObjective.decision_function). And both of those things come from the objectives, right? We could A) require an objective as input there or B) use classification accuracy by default when someone didn't pass in an objective. Other options? I like A more because the behavior is more transparent.

Hm, I was actually thinking about keeping an implementation closer to what we have today, which is if there's no objective specified (/it doesn't need fitting) then we use the pipeline estimator's predict method: https://github.com/FeatureLabs/evalml/blob/master/evalml/pipelines/pipeline_base.py#L200
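The fallback behavior described here can be sketched as follows. `Pipeline` and `Estimator` are simplified stand-ins for illustration, not evalml's classes:

```python
# Stand-in estimator that predicts the majority class.
class Estimator:
    def predict(self, X):
        return [0 for _ in X]

# When no objective is supplied, predict() delegates straight to the
# estimator; when one is supplied, it routes through the objective.
class Pipeline:
    def __init__(self, estimator):
        self.estimator = estimator

    def predict(self, X, objective=None):
        if objective is None:
            return self.estimator.predict(X)
        proba = [0.5 for _ in X]  # stand-in for predict_proba output
        return objective.predict(proba)

preds = Pipeline(Estimator()).predict([[1], [2]])
```

This keeps fit/predict usable without an objective, which is the behavior being argued for.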

What do you think? Would it be more clear to throw our own error? If so I'm a fan

I think for now, IndexError makes sense, but I did add a separate custom exception, ObjectiveNotFoundError, for invalid objective lookups.
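A minimal sketch of a custom exception like the ObjectiveNotFoundError mentioned here; the registry dict and `get_objective` helper are hypothetical, shown only to illustrate the lookup-failure path:

```python
# Custom exception raised when an objective name is not recognized,
# as opposed to a generic KeyError/IndexError.
class ObjectiveNotFoundError(Exception):
    """Raised when an objective name cannot be resolved."""

# Hypothetical registry; evalml's real lookup differs.
OBJECTIVES = {"log_loss_binary": object(), "f1": object()}

def get_objective(name):
    try:
        return OBJECTIVES[name]
    except KeyError:
        raise ObjectiveNotFoundError(
            f"{name} is not a valid objective") from None

try:
    get_objective("not_an_objective")
    caught = False
except ObjectiveNotFoundError:
    caught = True
```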

I guess I meant providing test coverage for the former. Like, we should have a test for each of those methods for BinaryClassificationPipeline, and for the other kinds

Yup, all our previous class tests fit into this category, since they now subclass BinaryClassificationPipeline, MulticlassClassificationPipeline, and RegressionPipeline :)

Yes, if you could take another quick final look before I merge it into the improved_objectives feature branch, that'd be fantastic!

Code under review in pipeline_base.py:

```python
        else:
            return objective.predict(y_predicted_proba)

        return self.estimator.predict(X_t)
```
@dsherry (Collaborator) commented Mar 2, 2020

This makes sense for now -- you've moved this code over from where it used to be in ObjectiveBase, if I'm following correctly. But I think before we merge the feature branch back to master, we should update BinaryClassificationPipeline.predict() to always use a classification threshold. I'd guess that'll be coming in a later PR though, right?

dsherry approved these changes Mar 2, 2020

@dsherry left a comment

👍

@angela97lin angela97lin merged commit 829af78 into improved_objectives Mar 2, 2020
2 checks passed
@dsherry dsherry mentioned this pull request Mar 4, 2020
angela97lin added a commit that referenced this pull request Apr 13, 2020
* Objectives API: Create new binary / multiclass pipeline classes and remove objectives from pipeline classes (#405)

* Objectives API: Remove ROC and confusion matrix as objectives (#422)

* Change `score` output to return one dictionary (#429)

* Create binary and multiclass objective classes  (#504)

* Update dependencies  (#412)

* Hide features with zero importance in plot by default (#413)

* Update dependencies check: package whitelist (#417)

* Add fixes necessary for docs to build for improved objectives project (#605)

* Remove calculating plot metrics from AutoML  (#615)
@angela97lin angela97lin deleted the 404_separated_pipelines branch Apr 17, 2020