[Question/Feature Request] Any explainability output or examples available to try out #130
Labels
Abhishek-eBook
Label your issue with this and try to win a free copy of Abhishek's ebook
enhancement
New feature or request
Feature request
What is the expected behavior?
No behaviour changes; rather, add examples to the docs or to the examples section of the repo.
What is motivation or use case for adding/changing the behavior?
Make it easy for users to adapt it into their ML workflows, as explainability is an important topic at the moment.
How should this be implemented in your opinion?
No implementation needed; docs and examples, either as a Python code snippet, a Jupyter notebook, or a Kaggle kernel, would be sufficient.
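As a sketch of what such an example might look like, here is a minimal, self-contained explainability snippet using scikit-learn's `permutation_importance`. The dataset and `RandomForestClassifier` model are stand-ins for illustration only, not this project's API; the project's own trained model would take their place.

```python
# Hypothetical explainability example: permutation feature importance.
# The dataset and model below are placeholders for the project's own
# trained model; any scikit-learn-compatible estimator works here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: the drop in test accuracy when a single
# feature's values are shuffled, averaged over n_repeats shuffles.
result = permutation_importance(
    model, X_test, y_test, n_repeats=5, random_state=0
)

# Print the five most important features.
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.4f}")
```

A notebook version of the same idea could add a bar chart of `result.importances_mean`; SHAP-based plots would be another natural option if that dependency is acceptable.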
Are you willing to work on this yourself?
Yes.