Switch from SmartPredictor to SmartExplainer #111
Comments
Hello @chetanambi, Thanks again for your feedback. Glad to see that you've taken the solution in hand. To answer your question: if you are interested, you can also contribute by forking this repository and following the steps described in the contributing.md file available in the repository. Thanks again for your contributions. Hoping that this answer suits you for now. The Team Shapash.
@JohannMartin When will this feature be available for use?
Hello @chetanambi, We are currently working on a new release of Shapash, and this method will be included in it. We've planned the release for next week. Thanks again for your contributions. The Team Shapash.
Hi Team,
Is it possible to generate local explanations for new data added to a SmartPredictor object? I went through all the tutorials, and I understand that once we are satisfied with the explainability results given by Shapash, we can use the SmartPredictor object for deployment. But my question is: how can we get a local explanation chart after deployment for the new data that is coming in? I hope my question is clear.
Thanks,
Chetan
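Editor's note: at the time of this thread the Shapash method for this was still unreleased (later versions expose something along the lines of `SmartPredictor.add_input` and `detail_contributions`, though check the official docs for the exact API). To make concrete what "a local explanation on new incoming data" means, here is a minimal, framework-free sketch for a plain linear model. All names below (`local_contributions`, the toy coefficients and baseline) are hypothetical illustrations, not Shapash functions:

```python
# Hypothetical sketch, NOT the Shapash API: for a linear model, a local
# explanation of one new row is the per-feature contribution
# coef[j] * (x[j] - baseline[j]); together with the intercept evaluated
# at the baseline, these contributions sum to the prediction.

def local_contributions(coefs, baseline, row):
    """Per-feature contributions of `row` relative to `baseline`."""
    return [c * (x - b) for c, x, b in zip(coefs, row, baseline)]

# Toy "deployed" model: y = 2*x0 + 0.5*x1 + 1
coefs = [2.0, 0.5]
intercept = 1.0
baseline = [0.0, 0.0]        # reference point, e.g. the training mean

new_row = [3.0, 4.0]         # incoming data seen after deployment
contribs = local_contributions(coefs, baseline, new_row)
prediction = intercept + sum(c * x for c, x in zip(coefs, new_row))

print(contribs)              # [6.0, 2.0]
print(prediction)            # 9.0
# Sanity check: the contributions exactly account for the gap between
# the prediction and the baseline output (here, the intercept).
assert abs(intercept + sum(contribs) - prediction) < 1e-9
```

In a deployed Shapash pipeline the same idea applies, except the contributions come from the SHAP-style backend attached to the predictor rather than from raw linear coefficients.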