This is the second notebook in a series exploring OpenVINO™ Explainable AI (XAI):
- OpenVINO™ Explainable AI Toolkit (1/3): Basic
- OpenVINO™ Explainable AI Toolkit (2/3): Deep Dive
- OpenVINO™ Explainable AI Toolkit (3/3): Saliency map interpretation
OpenVINO™ Explainable AI (XAI) provides a suite of XAI algorithms for visual explanation of OpenVINO™ Intermediate Representation (IR) models.
Using OpenVINO XAI, you can generate saliency maps that highlight regions of interest in input images from the model's perspective. This helps users understand why complex AI models produce specific responses.
This notebook shows an example of how to use OpenVINO XAI.
It produces a heatmap of the areas of interest on which a neural network (classification or detection) focuses before making a decision.
Example: Saliency map for the flat-coated retriever class of a MobileNetV3 classification model:
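Generating such a map takes only a few lines. Below is a minimal sketch, assuming a local MobileNetV3 IR file and a test image (both paths are placeholders); the `Explainer` entry point and `Task` enum follow the toolkit's public API, but double-check the exact signatures against the current documentation:

```python
import cv2
import numpy as np
import openvino as ov
import openvino_xai as xai

# Placeholder paths: substitute your own IR model and test image.
model = ov.Core().read_model("mobilenet_v3.xml")
image = cv2.imread("flat_coated_retriever.jpg")

def preprocess_fn(x: np.ndarray) -> np.ndarray:
    # Resize and add a batch dimension; adapt to your model's input layout.
    x = cv2.resize(x, (224, 224))
    return np.expand_dims(x, 0)

# In AUTO mode (the default), the Explainer first tries the white-box path
# and falls back to black-box if the model graph cannot be modified.
explainer = xai.Explainer(
    model=model,
    task=xai.Task.CLASSIFICATION,
    preprocess_fn=preprocess_fn,
)

# Generate saliency maps and save them as overlays on the input image.
explanation = explainer(image, overlay=True)
explanation.save("saliency_maps/")
```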
The tutorial consists of the following steps (a rough sketch of the three modes follows the list):
- Run `Explainer` in `AUTO` mode
- Specify preprocess and postprocess functions
- Run `Explainer` in `WHITEBOX` mode
- Insert the XAI branch into an IR or PyTorch model to use the updated model in your own pipelines
- Run `Explainer` in `BLACKBOX` mode
- Advanced: add label names and use them to save saliency maps instead of label indexes
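As a preview of these steps, the sketch below contrasts the two explicit modes and standalone XAI-branch insertion. The `ExplainMode` import path, the `explain_mode` keyword, and the shape of `postprocess_fn` are assumptions to verify against the current toolkit API:

```python
import numpy as np
import openvino as ov
import openvino_xai as xai
from openvino_xai.explainer import ExplainMode

model = ov.Core().read_model("mobilenet_v3.xml")  # placeholder IR path
image = np.zeros((224, 224, 3), dtype=np.uint8)   # stand-in for a real image

def preprocess_fn(x: np.ndarray) -> np.ndarray:
    return np.expand_dims(x, 0)  # adapt to your model's input layout

def postprocess_fn(x) -> np.ndarray:
    # Black-box methods repeatedly query the model, so raw outputs must be
    # mapped to class scores; index 0 assumes a single-output model.
    return x[0]

# WHITEBOX: an XAI branch is inserted into the model graph, so a saliency
# map comes almost for free with a single inference pass.
wb_explainer = xai.Explainer(
    model=model,
    task=xai.Task.CLASSIFICATION,
    preprocess_fn=preprocess_fn,
    explain_mode=ExplainMode.WHITEBOX,
)
wb_explanation = wb_explainer(image)

# BLACKBOX: the model is left untouched; saliency is estimated by
# perturbing the input, which is slower but model-agnostic.
bb_explainer = xai.Explainer(
    model=model,
    task=xai.Task.CLASSIFICATION,
    preprocess_fn=preprocess_fn,
    postprocess_fn=postprocess_fn,
    explain_mode=ExplainMode.BLACKBOX,
)
bb_explanation = bb_explainer(image)

# Alternatively, insert the XAI branch yourself and reuse the updated
# model in your own inference pipeline.
model_xai = xai.insert_xai(model, task=xai.Task.CLASSIFICATION)
```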
These are the explainable AI algorithms supported by OpenVINO XAI:
| Domain | Task | Type | Algorithm | Links |
|---|---|---|---|---|
| Computer Vision | Image Classification | White-Box | ReciproCAM | arxiv / src |
| | | | VITReciproCAM | arxiv / src |
| | | | ActivationMap | experimental / src |
| | | Black-Box | AISEClassification | src |
| | | | RISE | arxiv / src |
| | Object Detection | White-Box | ClassProbabilityMap | experimental / src |
| | | Black-Box | AISEDetection | src |
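To pin a specific algorithm from this table rather than letting the chosen mode pick its default, the Explainer can be pointed at one explicitly. A minimal sketch, assuming the `explain_method` keyword and the `Method` enum spelling (verify both against the current API):

```python
import numpy as np
import openvino as ov
import openvino_xai as xai
from openvino_xai.explainer import ExplainMode

model = ov.Core().read_model("mobilenet_v3.xml")  # placeholder IR path

# `explain_method` and `Method.RECIPROCAM` are assumed names; check the docs.
explainer = xai.Explainer(
    model=model,
    task=xai.Task.CLASSIFICATION,
    preprocess_fn=lambda x: np.expand_dims(x, 0),  # adapt to your input layout
    explain_mode=ExplainMode.WHITEBOX,
    explain_method=xai.Method.RECIPROCAM,
)
```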
This is a self-contained example that relies solely on its own code.
We recommend running the notebook in a virtual environment. You only need a Jupyter server to start. For details, please refer to the Installation Guide.
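For reference, a typical first cell might install the dependencies directly; the package names and version pins below are assumptions, so confirm them against the Installation Guide:

```python
# Assumed package names and versions; verify against the Installation Guide.
%pip install -q "openvino>=2024.2" "openvino_xai>=1.1" opencv-python numpy
```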