[BUG] Unexpected Error Message in the notebook #63

Closed
chhetri22 opened this issue Sep 20, 2019 · 2 comments

Comments

@chhetri22
Contributor

When running the Interpretable Classification Methods notebook, if explain_global or explain_local is called on ExplainableBoostingClassifier without fitting the model first, a NotFittedError is not raised; instead, an AttributeError is raised.

Similarly, when running the Interpretable Regression Methods notebook, if explain_global or explain_local is called on ExplainableBoostingRegressor without fitting the model first, a NotFittedError is not raised; instead, an AttributeError is raised.
(screenshot: interpret_error)
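
For context, a minimal repro sketch (the ExplainableBoostingClassifier import path is assumed to match the version in this report):

    from interpret.glassbox import ExplainableBoostingClassifier

    ebm = ExplainableBoostingClassifier()
    # The model is deliberately not fitted here. At the time of this report,
    # calling explain_global() raised AttributeError (missing attribute_sets_)
    # rather than scikit-learn's NotFittedError.
    ebm.explain_global()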

@mikewlange

Hi. I think I can shed some light, and I have a couple of solutions for you, @chhetri22.

  1. It's not really a bug. If you don't fit your model, you won't have an attribute set, and the attribute set is the core of the entire explain process. The error is more of an "if you're enumerating an attribute_set that does not exist, it's an error" alert. My guess is that, because this failure already surfaces in many scenarios, including explain_global as you found out, why add a check where it's not strictly needed? Less is more. If I were Microsoft I would not change the code. But very good find. You can QA for me any time.

But you can change the code really easily. It's one line. In the case you describe, you would add

    check_is_fitted(self, "has_fitted_")

right after the line

    def explain_global(self, name=None):

and before the line

    for attribute_set_index, attribute_set in enumerate(self.attribute_sets_):

inside this function:
def explain_global(self, name=None):
    if name is None:
        name = gen_name_from_class(self)

    # Obtain min/max for model scores
    lower_bound = np.inf
    upper_bound = -np.inf
    for attribute_set_index, attribute_set in enumerate(self.attribute_sets_):
        errors = self.model_errors_[attribute_set_index]
        scores = self.attribute_set_models_[attribute_set_index]

        lower_bound = min(lower_bound, np.min(scores - errors))
        upper_bound = max(upper_bound, np.max(scores + errors))
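
To make the effect concrete, here is a small standalone sketch (the Demo class and its has_fitted_ flag are made up for illustration; check_is_fitted comes from scikit-learn) showing how the check turns the unfitted case into a NotFittedError:

    from sklearn.exceptions import NotFittedError
    from sklearn.utils.validation import check_is_fitted

    class Demo:
        def fit(self):
            # Mark the instance as fitted, mirroring interpret's has_fitted_ flag.
            self.has_fitted_ = True
            return self

        def explain_global(self):
            # Raises NotFittedError with a clear message if fit() was never called,
            # instead of an AttributeError further down.
            check_is_fitted(self, "has_fitted_")
            return "ok"

    try:
        Demo().explain_global()
    except NotFittedError as err:
        print(err)  # e.g. "This Demo instance is not fitted yet. Call 'fit' ..."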
  2. I'm guessing you installed it the way the instructions say, pip install -U interpret, or through Anaconda itself. However, and this will help you in the future, this is a platform, not a little library; it spans distinct projects and languages. When you see that, build it from source, especially with PyPI. Granted, it's not as bad as node, but it's not conda.

  3. If this type of thing concerns you enough to bring it to light, you should be helping them fix their 20-year-old HEP code at CERN. Honestly, you would lose your mind. And people wonder (people = me) why we have made only a single significant discovery relative to its capability. But that's another story...

Cheers.

@interpret-ml
Collaborator

Thanks @chhetri22 & @mikewlange - this should no longer be an issue.
