
Interpret's LIME is not the same as original LIME #133

Open
Dola47 opened this issue May 30, 2020 · 0 comments

Labels
bug Something isn't working

Dola47 commented May 30, 2020

Dear All,

First Question:

I use the original LIME to explain some instances of my dataset. I decided to also explain the same instances with Interpret's LIME, since Interpret has the better user interface. However, when I explain the same instance under the same conditions, the results are different. Any ideas about why the results do not match?

Here are the results from the original LIME:
[screenshot: LIME explanation results]

Here are the results from Interpret's LIME:
[screenshot: Interpret LIME explanation results]

Some thoughts from my side:
1- Does Interpret's LIME use a fixed random seed the way the original LIME can? If it does not, the results will vary from call to call (see the seed sketch after my code below).
2- Does Interpret's LIME do classification by default?
3- Does Interpret's LIME pick up the default values of the parameters of LimeTabularExplainer and LIME's explain_instance automatically, or do I have to specify them myself?

This is how I called Interpret's LIME so that it runs under exactly the same conditions as my original LIME explainer:

```python
lime_trial = LimeTabular(
    predict_fn=model.explainer._build_predict_fn(),
    data=label_encoded_data_train_x,
    training_labels=label_encoded_data_train_y,
    categorical_features=model.explainer.categorical_indices,
    categorical_names=model.explainer.categorical_names,
    feature_selection='none',
    feature_names=model.explainer.features_for_lime,
    discretize_continuous=True,
    discretizer='quartile',
    class_names=['NOK', 'OK'],
    kernel_width=None,
    verbose=False,
    explain_kwargs={'num_features': 10, 'num_samples': 5000, 'top_labels': None,
                    'distance_metric': 'euclidean', 'model_regressor': None},
)

lime_trial_xgb = lime_trial.explain_local(
    label_encoded_interesting_parts_x,
    label_encoded_interesting_parts_y,
    name='LIME',
)
```
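For what it is worth, here is a minimal sketch of how I would try to pin the seed on both sides. It assumes that extra keyword arguments given to Interpret's LimeTabular are forwarded to lime's LimeTabularExplainer (which does accept random_state); the data and model names are the same placeholders as in my call above.

```python
# Minimal sketch, assuming extra keyword arguments passed to Interpret's
# LimeTabular are forwarded to lime's LimeTabularExplainer, so that
# random_state can be pinned identically in both libraries.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from interpret.blackbox import LimeTabular

SEED = 42

# Original LIME with a fixed seed
lime_explainer = LimeTabularExplainer(
    training_data=np.asarray(label_encoded_data_train_x),
    mode='classification',
    feature_names=model.explainer.features_for_lime,
    discretize_continuous=True,
    random_state=SEED,
)

# Interpret's wrapper with the same seed (assumed to be passed through)
interpret_lime = LimeTabular(
    predict_fn=model.explainer._build_predict_fn(),
    data=label_encoded_data_train_x,
    random_state=SEED,
)
```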

Second Question:

For the SHAP kernel explainer, I am not sure whether the nsamples parameter (which SHAP uses when computing the shap values) is forwarded correctly when I pass it to Interpret's ShapKernel. It makes no difference whether I pass it or not, so it appears the class is not picking up the parameter correctly.
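Here is a sketch of what I mean, assuming ShapKernel accepts an explain_kwargs dictionary that it forwards to shap's shap_values call (the data names are my own placeholders); if it is not forwarded, that would explain why nsamples has no visible effect:

```python
# Sketch, assuming explain_kwargs is forwarded to shap's shap_values call.
from interpret.blackbox import ShapKernel

shap_explainer = ShapKernel(
    predict_fn=model.explainer._build_predict_fn(),
    data=label_encoded_data_train_x,
    explain_kwargs={'nsamples': 1000},  # intended to control the number of SHAP samples
)
shap_local = shap_explainer.explain_local(
    label_encoded_interesting_parts_x,
    label_encoded_interesting_parts_y,
    name='SHAP',
)
```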

Third Question:

I know that the EBM model handles categorical features automatically. Is there a way to turn that off? I prefer to preprocess my features on my own and then fit the model directly on the processed features.
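For example, something along these lines is what I am after, assuming ExplainableBoostingClassifier has a feature_types argument that lets me declare every already-encoded column as continuous (the data names are my placeholders from above):

```python
# Sketch, assuming feature_types lets me override the automatic
# categorical handling for columns I have already encoded myself.
from interpret.glassbox import ExplainableBoostingClassifier

feature_names = model.explainer.features_for_lime  # same feature list as above
ebm = ExplainableBoostingClassifier(
    feature_names=feature_names,
    feature_types=['continuous'] * len(feature_names),  # treat pre-processed columns as-is
)
ebm.fit(label_encoded_data_train_x, label_encoded_data_train_y)
```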

Many thanks.

paulbkoch mentioned this issue Jan 22, 2023
paulbkoch added the bug (Something isn't working) label Jan 22, 2023