Update argument order of objectives to align with sklearn #698
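For context, scikit-learn's metrics put the ground truth first: `metric(y_true, y_pred)`. This PR reorders evalml's objective arguments to match. A minimal sketch of the target convention, using a real sklearn metric (the variable values are illustrative):

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1]  # ground truth labels
y_pred = [1, 0, 0, 1]  # model predictions

# sklearn's convention: ground truth first, predictions second.
print(accuracy_score(y_true, y_pred))  # 0.75
```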
Conversation
Codecov Report
@@ Coverage Diff @@
## master #698 +/- ##
=======================================
Coverage 99.21% 99.21%
=======================================
Files 140 140
Lines 4985 4985
=======================================
Hits 4946 4946
Misses 39 39
Continue to review full report at Codecov.
The changes look good.
Because this is a dangerous change to make, we should triple-check it. Once you merge #707 and update this PR, ping me and I'll check the branch out and we should each poke around to ensure we didn't miss any spots.
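To make the risk concrete, here is a hedged sketch of why a positional-argument swap is dangerous for asymmetric objectives. The class, signature, and cost weights below are illustrative assumptions, not evalml's actual code:

```python
import numpy as np

class ExampleObjective:
    """Illustrative objective only; not evalml's actual base class."""

    # New order: ground truth first, matching sklearn's metric(y_true, y_pred).
    # Hypothetical old order: def score(self, y_predicted, y_true)
    def score(self, y_true, y_predicted):
        y_true, y_predicted = np.asarray(y_true), np.asarray(y_predicted)
        fn = np.sum((y_true == 1) & (y_predicted == 0))  # missed positives
        fp = np.sum((y_true == 0) & (y_predicted == 1))  # false alarms
        return float(10 * fn + fp)  # asymmetric cost: input order matters

obj = ExampleObjective()
truth, preds = [1, 1, 0, 0], [0, 0, 0, 0]
print(obj.score(truth, preds))  # 20.0 -- two missed positives
print(obj.score(preds, truth))  # 2.0  -- same data, arguments swapped
```

A positional caller written against the old order keeps running but silently scores the wrong thing, which is why each callsite needs checking.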
There weren't any callsites in the docs which needed to be updated?
@dsherry I didn't find anywhere in the docs to update except our custom objectives notebook... which I realized was really outdated, yikes! I ended up just replacing it with our new version of FraudCost (before, it was just a copy of the old implementation lol), thus updating the docs.
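For readers following along, here is a rough sketch of what a FraudCost-style custom objective looks like with the new ground-truth-first order. The class below is illustrative only and does not reproduce evalml's actual FraudCost implementation or parameter names:

```python
import numpy as np

class FraudCostSketch:
    """Illustrative stand-in for a FraudCost-style objective (not evalml's code)."""

    def __init__(self, fraud_payout_percentage=1.0, interchange_fee=0.02):
        self.fraud_payout_percentage = fraud_payout_percentage
        self.interchange_fee = interchange_fee

    def objective_function(self, y_true, y_predicted, amounts):
        # Ground truth comes first, mirroring sklearn's metric(y_true, y_pred).
        y_true = np.asarray(y_true)
        y_predicted = np.asarray(y_predicted)
        amounts = np.asarray(amounts, dtype=float)
        # Fraud we let through costs the full payout on that transaction.
        fraud_cost = amounts * self.fraud_payout_percentage * ((y_true == 1) & (y_predicted == 0))
        # Legitimate transactions we block forfeit the interchange fee.
        blocked_fee = amounts * self.interchange_fee * ((y_true == 0) & (y_predicted == 1))
        return float((fraud_cost + blocked_fee).sum() / amounts.sum())

cost = FraudCostSketch().objective_function(
    y_true=[0, 1, 0], y_predicted=[1, 0, 0], amounts=[100.0, 50.0, 25.0]
)
print(cost)  # fraction of total transaction volume lost to fraud and fees
```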
@angela97lin could you please also update this one callsite in …
Some other places to change (see the sketch after this list):

evalml/pipelines/plot_utils.py:33: labels = unique_labels(y_predicted, y_true)  # I already mentioned this one above
evalml/tests/objective_tests/test_standard_metrics.py:37: obj.score(y_predicted=[], y_true=[1])
evalml/tests/objective_tests/test_standard_metrics.py:39: obj.score(y_predicted=[1], y_true=[])
evalml/tests/objective_tests/test_standard_metrics.py:41: obj.score(y_predicted=[0], y_true=[1, 0])
evalml/tests/objective_tests/test_standard_metrics.py:43: obj.score(y_predicted=np.array([0]), y_true=np.array([1, 0]))
evalml/tests/objective_tests/test_standard_metrics.py:60: obj.score(y_predicted=[], y_true=[1])
evalml/tests/objective_tests/test_standard_metrics.py:62: obj.score(y_predicted=[1], y_true=[])
evalml/tests/objective_tests/test_standard_metrics.py:64: obj.score(y_predicted=[0], y_true=[1, 0])
evalml/tests/objective_tests/test_standard_metrics.py:66: obj.score(y_predicted=np.array([0]), y_true=np.array([1, 0]))

Other than that, this looks good! I'll approve once updated.
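As the sketch below illustrates, keyword arguments bind by name, so these callsites still pass after the signature change; reordering them is purely for consistency with the new y_true-first signature. The stub class here is hypothetical, not evalml's code:

```python
import numpy as np

class StubObjective:
    """Hypothetical stand-in for an evalml objective; illustration only."""

    def score(self, y_true, y_predicted):  # new argument order
        return float(np.mean(np.asarray(y_true) == np.asarray(y_predicted)))

obj = StubObjective()

# Keyword calls are functionally equivalent regardless of the order written:
assert obj.score(y_predicted=[0, 0], y_true=[1, 0]) == obj.score(y_true=[1, 0], y_predicted=[0, 0])

# Preferred after this PR: ground truth first, matching the signature.
print(obj.score(y_true=[1, 0], y_predicted=[0, 0]))  # 0.5
```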
@dsherry Whoops, must have missed those since the calls were functionally equivalent (named parameters); updated!
Great to get this out of the way! 🎆
Let's merge #716 first and then wait for green tests before merging this one.
Closes #662