This repository has been archived by the owner on Dec 6, 2023. It is now read-only.

raise AttributeError if predict_proba is not available #98

Merged
1 commit merged into scikit-learn-contrib:master on Nov 2, 2016

Conversation

@kmike (Contributor) commented Oct 30, 2016

In scikit-learn, when the predict_proba method is not available, AttributeError is raised instead of NotImplementedError. In this PR:

  • classifiers are changed to follow the same convention (a minimal sketch of this convention follows the list);
  • predict_log_proba mentions are removed because lightning doesn't provide this method;
  • more tests are added for predict_proba results.
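
A minimal sketch of that convention, assuming a property-based implementation similar to scikit-learn's SGDClassifier; the class, losses and helper names below are illustrative, not lightning's actual code:

import numpy as np


class SketchClassifier:
    # Illustrative only, not lightning's implementation.
    def __init__(self, loss="log"):
        self.loss = loss

    @property
    def predict_proba(self):
        # Raising AttributeError on attribute access (rather than
        # NotImplementedError when the method is called) makes
        # hasattr(clf, "predict_proba") return False for unsupported
        # losses, matching scikit-learn's behaviour.
        if self.loss not in ("log", "modified_huber"):
            raise AttributeError("predict_proba only supported when "
                                 "loss='log' or loss='modified_huber'")
        return self._predict_proba

    def _predict_proba(self, X):
        # Placeholder probability computation, just for the sketch.
        df = np.asarray(X, dtype=float).sum(axis=1)
        prob = 1.0 / (1.0 + np.exp(-df))
        return np.column_stack([1 - prob, prob])


clf = SketchClassifier(loss="hinge")
print(hasattr(clf, "predict_proba"))                            # False
print(hasattr(SketchClassifier(loss="log"), "predict_proba"))   # True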

@kmike (Contributor Author) commented Oct 30, 2016

Python 2.7 and Python 3.5 tests are failing on Travis, but failures don't look related to the PR; Python 3.4 passes on Travis.

@fabianp (Member) commented Oct 30, 2016

I've relaunched Travis, let's see what happens.

@kmike (Contributor Author) commented Oct 30, 2016

Weird, it now fails with the same error as https://travis-ci.org/scikit-learn-contrib/lightning/jobs/171807463, but in a different environment.

@fabianp (Member) commented Oct 30, 2016

OK, I fixed a bug that I introduced in my recent pull request; let's see if the tests pass now.

@fabianp (Member) commented Oct 30, 2016

all tests pass now :-)

@fabianp (Member) commented Oct 30, 2016

This looks good to me. I'll give @mblondel some time to give his opinion.

Thanks for your contribution!

@@ -52,7 +59,7 @@ def predict_proba(self, X):
             prob += 1
             prob /= 2
         else:
-            raise NotImplementedError("predict_(log_)proba only supported when"
+            raise NotImplementedError("predict_proba only supported when"
@kmike (Contributor Author):
I've kept NotImplementedError here as a defensive measure, but probably it is better to remove it.

A Member replied:

Why do you think it would be better to remove it?

@kmike (Contributor Author):

Because the code flow should never reach this exception - AttributeError is raised earlier. If for some reason _predict_proba is used instead of predict_proba, then with an unsupported loss function an exception is still raised (the prob variable is not defined); it is harder to interpret than NotImplementedError though. So maybe keeping it is fine :)
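
A small sketch of that point; the function loosely follows the diff hunk above, but the names and losses are assumptions rather than lightning's exact code:

import numpy as np

def _predict_proba(df, loss):
    if loss == "log":
        prob = 1.0 / (1.0 + np.exp(-df))
    elif loss == "modified_huber":
        prob = np.minimum(1, np.maximum(-1, df))
        prob += 1
        prob /= 2
    # Without a final `else: raise NotImplementedError(...)`, an
    # unsupported loss falls through and the next line fails with
    # UnboundLocalError on `prob`, which is harder to interpret than an
    # explicit NotImplementedError.
    return np.column_stack([1 - prob, prob])

_predict_proba(np.array([0.3, -0.7]), loss="hinge")  # raises UnboundLocalError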

@mblondel (Member):

LGTM.

Could you also take a look at #18?

@fabianp (Member) commented Nov 2, 2016

2 LGTM = merge :-)

@fabianp merged commit 8f11fc7 into scikit-learn-contrib:master on Nov 2, 2016
@kmike (Contributor Author) commented Nov 2, 2016

Thanks!

@mblondel sorry; I may take a look at it at some point in the future, but not right now.

@mblondel (Member) commented Nov 2, 2016

Sure, no worries. I was just trying to motivate someone to fix a long-standing issue :b

@fabianp (Member) commented Nov 2, 2016

@mblondel: are you OK if I make a quick release ASAP?

lopuhin added a commit to TeamHG-Memex/eli5 that referenced this pull request Nov 2, 2016
Try to look inside estimators that wrap other estimators:
predict_proba can be a property on the wrapped estimator but a method
on the wrapper estimator, so the AttributeError will only be raised at
runtime. But we don't want to catch it at runtime because it might be
unrelated. This fixes a failure that appeared after the
scikit-learn-contrib/lightning#98 merge
(see https://travis-ci.org/TeamHG-Memex/eli5/jobs/172586529).
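
A self-contained sketch of the situation that commit describes (the class names are hypothetical): the wrapped estimator exposes predict_proba as a property that may raise AttributeError, while the wrapper defines it as a plain method that delegates, so hasattr on the wrapper is True and the error only surfaces when the method is actually called.

class Wrapped:
    @property
    def predict_proba(self):
        # Mirrors lightning after PR #98: accessing the attribute raises.
        raise AttributeError("probability estimates are not available")


class Wrapper:
    def __init__(self, estimator):
        self.estimator = estimator

    def predict_proba(self, X):
        # Delegation happens only when the method is called, so the
        # inner AttributeError escapes here, at runtime.
        return self.estimator.predict_proba(X)


w = Wrapper(Wrapped())
print(hasattr(w, "predict_proba"))  # True, despite the wrapped estimator
# w.predict_proba(X) would raise AttributeError only at call time.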
@mblondel (Member) commented Nov 2, 2016

@fabianp Sure, thanks for doing this!
