Description
A waterfall-style chart showing how each feature contributes to pushing a model prediction from a base value (expected output) to the final predicted value. Bars extend left (negative SHAP value) or right (positive SHAP value), stacking cumulatively. This is a core ML explainability visualization complementing the existing SHAP summary plot.
Applications
- Explaining individual predictions in credit scoring models
- Debugging unexpected model outputs in healthcare ML
- Communicating feature impact to non-technical stakeholders
- Regulatory compliance (model explainability requirements)
Data
- feature (str) — feature names
- shap_value (float) — SHAP contribution per feature
- base_value (float) — expected model output
- final_value (float) — actual prediction
- Size: 10–20 features typical
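The fields above could be carried in a simple record; a hypothetical example (field names from the Data section, values purely illustrative):

```python
# Hypothetical data shape for one explained prediction (values illustrative).
record = {
    "feature": ["credit_utilization", "income", "age"],  # feature names
    "shap_value": [0.12, -0.05, 0.02],                   # SHAP contribution per feature
    "base_value": 0.30,                                  # expected model output
    "final_value": 0.39,                                 # actual prediction
}

# Consistency check: final_value should equal base_value + sum of SHAP values.
assert abs(record["base_value"] + sum(record["shap_value"])
           - record["final_value"]) < 1e-9
```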
Notes
- Features ordered by absolute SHAP value magnitude
- Cumulative bar segments from base_value to final_value
- Color: red for positive, blue for negative contributions
- Show base value and final prediction as reference lines
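The notes above imply a bar-layout pass before any drawing: order features by absolute SHAP magnitude, stack segments cumulatively from the base value, and color by sign. A minimal sketch of that layout logic (function and variable names are illustrative, not a fixed API):

```python
def waterfall_segments(features, shap_values, base_value):
    """Return (feature, start, end, color) tuples, ordered by |SHAP| descending.

    Each bar starts where the previous one ended, so the segments stack
    cumulatively from base_value; the last segment ends at the final prediction
    (base_value + sum of SHAP values).
    """
    ranked = sorted(zip(features, shap_values), key=lambda fv: -abs(fv[1]))
    segments, cursor = [], base_value
    for name, value in ranked:
        start, end = cursor, cursor + value
        color = "red" if value > 0 else "blue"  # red = positive, blue = negative
        segments.append((name, start, end, color))
        cursor = end
    return segments

# Three features pushing the prediction from 0.30 to 0.55:
segs = waterfall_segments(["age", "income", "tenure"], [0.30, -0.10, 0.05], 0.30)
```

A renderer would then draw one horizontal bar per segment and add vertical reference lines at base_value and at the end of the last segment.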