
Update argument order of objectives to align with sklearn #698


Merged
angela97lin merged 28 commits from 662_objective_parameters into master on Apr 25, 2020

Conversation

angela97lin (Contributor)

Closes #662
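
For context, the change reorders objective arguments so that ground truth comes first, matching sklearn's metric signatures. A minimal sketch of the convention (the evalml signatures here are inferred from the callsites discussed below):

    # Old evalml order (assumed):  objective.score(y_predicted, y_true)
    # New sklearn-aligned order:   objective.score(y_true, y_predicted)

    # sklearn metrics take ground truth first:
    from sklearn.metrics import accuracy_score

    y_true = [0, 1, 1, 0]
    y_predicted = [0, 1, 0, 0]
    print(accuracy_score(y_true, y_predicted))  # 0.75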

@angela97lin angela97lin self-assigned this Apr 23, 2020

codecov bot commented Apr 23, 2020

Codecov Report

Merging #698 into master will not change coverage.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master     #698   +/-   ##
=======================================
  Coverage   99.21%   99.21%           
=======================================
  Files         140      140           
  Lines        4985     4985           
=======================================
  Hits         4946     4946           
  Misses         39       39           
Impacted Files Coverage Δ
...alml/objectives/binary_classification_objective.py 100.00% <100.00%> (ø)
evalml/objectives/fraud_cost.py 100.00% <100.00%> (ø)
evalml/objectives/lead_scoring.py 95.00% <100.00%> (ø)
evalml/objectives/objective_base.py 100.00% <100.00%> (ø)
evalml/objectives/standard_metrics.py 100.00% <100.00%> (ø)
evalml/pipelines/binary_classification_pipeline.py 100.00% <100.00%> (ø)
...ml/pipelines/multiclass_classification_pipeline.py 100.00% <100.00%> (ø)
evalml/pipelines/pipeline_base.py 99.51% <100.00%> (ø)
evalml/pipelines/plot_utils.py 100.00% <100.00%> (ø)
...alml/tests/objective_tests/test_fraud_detection.py 100.00% <100.00%> (ø)
... and 3 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update fafb0e8...6c29b8b.

@angela97lin angela97lin marked this pull request as ready for review April 24, 2020 05:29
@angela97lin angela97lin requested a review from dsherry April 24, 2020 05:30
@dsherry (Contributor) left a comment

The changes look good.

Because this is a dangerous change to make, we should triple-check it. Once you merge #707 and update this PR, ping me and I'll check the branch out and we should each poke around to ensure we didn't miss any spots.

There weren't any callsites in the docs that needed to be updated?


angela97lin commented Apr 24, 2020

@dsherry I didn't find anywhere in the docs to update except our custom objectives notebook... which I realized was really outdated, yikes! I ended up replacing it with our new, updated version of FraudCost (before, it was just a copy of the old implementation), thus updating the docs with score!
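
For readers following along, here's a hypothetical, much-simplified sketch of what a custom objective's score looks like with the new ordering (this is not the actual FraudCost implementation; the class and cost value are made up for illustration):

    import numpy as np

    class SimpleFalsePositiveCost:
        """Hypothetical objective: charge a fixed cost per false positive."""

        def score(self, y_true, y_predicted):
            # Ground truth first, predictions second -- the new sklearn-style order.
            y_true = np.asarray(y_true)
            y_predicted = np.asarray(y_predicted)
            false_positives = np.sum((y_predicted == 1) & (y_true == 0))
            return float(false_positives) * 5.0  # assumed cost of $5 per false positive

    print(SimpleFalsePositiveCost().score(y_true=[0, 1, 0], y_predicted=[1, 1, 0]))  # 5.0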

angela97lin (Contributor, Author)

@dsherry Since I removed the pipeline cleanup code, codecov is no longer complaining, and this PR is no longer blocked by #707!

@angela97lin angela97lin requested a review from dsherry April 24, 2020 19:53

dsherry commented Apr 24, 2020

@angela97lin could you please also update this one callsite in confusion_matrix? It's code I added recently. The arg order doesn't matter for the sklearn function being called, but it would be great to catch all usages of the old orderings in this PR.
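
(For anyone wondering why the order doesn't matter there: sklearn's unique_labels just unions and sorts the labels across all arrays it receives, so both orderings give the same result.)

    from sklearn.utils.multiclass import unique_labels

    # Both argument orders yield the same sorted label set.
    print(unique_labels([0, 1], [1, 2]))  # [0 1 2]
    print(unique_labels([1, 2], [0, 1]))  # [0 1 2]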


dsherry commented Apr 24, 2020

Some other places to change: git grep -n "y_pred[^ ]* y_true" prints out:

# I already mentioned this one above
evalml/pipelines/plot_utils.py:33:    labels = unique_labels(y_predicted, y_true)
evalml/tests/objective_tests/test_standard_metrics.py:37:        obj.score(y_predicted=[], y_true=[1])
evalml/tests/objective_tests/test_standard_metrics.py:39:        obj.score(y_predicted=[1], y_true=[])
evalml/tests/objective_tests/test_standard_metrics.py:41:        obj.score(y_predicted=[0], y_true=[1, 0])
evalml/tests/objective_tests/test_standard_metrics.py:43:        obj.score(y_predicted=np.array([0]), y_true=np.array([1, 0]))
evalml/tests/objective_tests/test_standard_metrics.py:60:        obj.score(y_predicted=[], y_true=[1])
evalml/tests/objective_tests/test_standard_metrics.py:62:        obj.score(y_predicted=[1], y_true=[])
evalml/tests/objective_tests/test_standard_metrics.py:64:        obj.score(y_predicted=[0], y_true=[1, 0])
evalml/tests/objective_tests/test_standard_metrics.py:66:        obj.score(y_predicted=np.array([0]), y_true=np.array([1, 0]))

Other than that, this looks good! I'll approve once updated.
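
For illustration, the fix at each of those callsites is just flipping the written keyword order (behavior is unchanged since the arguments are named), e.g., mirroring one of the grep'd lines above:

    # before
    obj.score(y_predicted=[0], y_true=[1, 0])
    # after
    obj.score(y_true=[1, 0], y_predicted=[0])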

angela97lin (Contributor, Author)

@dsherry Whoops, must have missed those since they were functionally equivalent (named parameters); updated!

@angela97lin angela97lin requested a review from dsherry April 24, 2020 22:03
@dsherry (Contributor) left a comment

Great to get this out of the way! 🎆

Let's wait to merge #716 first, and then for green tests, before merging this.

@angela97lin angela97lin merged commit 5b97ed0 into master Apr 25, 2020
@angela97lin angela97lin deleted the 662_objective_parameters branch April 25, 2020 17:49