"description": "What's the use of sophisticated machine learning models if you can't\ninterpret them?\n\nIn fact, many industries including finance and healthcare require clear\nexplanations of why a decision is made. This tutorial covers recent\nmodel interpretability techniques that are essentials in your data\nscientist toolbox: Eli5, LIME (Local Interpretable Model-Agnostic\nExplanations) and SHAP (SHapley Additive exPlanations).\n\nYou will learn how to apply these techniques in Python on real-world\ndata science problems in order to debug your models and explain their\ndecisions.\n\nYou will also learn the conceptual background behind these techniques so\nyou can better understand when they are appropriate.\n",