# interpretability-and-explainability

Here are 19 public repositories matching this topic...

Visualization methods for interpreting CNNs and Vision Transformers trained in either a supervised or a self-supervised way. The methods are based on CAM or on the attention mechanism of Transformers, and the results are evaluated both qualitatively and quantitatively.

  • Updated Jan 17, 2023
  • Python
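For readers new to the topic, the block below is a minimal sketch of the basic CAM idea (Zhou et al., 2016), not this repository's own code. It assumes a torchvision ResNet-18, whose final convolutional features feed a global average pool followed by a single linear classifier, which is exactly the setting vanilla CAM requires.

```python
# Minimal CAM sketch (not the repository's code): weight the last conv feature maps
# by the classifier weights of the predicted class, then sum and upsample.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def save_features(module, inp, out):
    features["maps"] = out  # shape (1, 512, 7, 7) for a 224x224 input

model.layer4.register_forward_hook(save_features)

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()

w = model.fc.weight[cls]                   # (512,) classifier weights for the predicted class
cam = torch.einsum("c,chw->hw", w, features["maps"][0])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]  # upsample to image size
```

The resulting heatmap can be overlaid on the input image to show which regions drove the predicted class; attention-based methods for Transformers play an analogous role using attention weights instead of classifier-weighted feature maps.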

TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients responsible for global model predictions, achieving 99% accuracy across diverse datasets (e.g., medical imaging) and neural networks (e.g., GPT).

  • Updated Aug 28, 2024
  • Python
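The sketch below is not TraceFL's neuron-provenance algorithm; it is only a toy illustration of the underlying question ("which client is responsible for this prediction?"). With FedAvg and a linear model, the predicted logit decomposes exactly into one term per client, so a per-client responsibility score can be read off directly. All names and dimensions are invented for the example; see the repository for the actual mechanism, which tracks provenance at the level of individual neurons in deep models.

```python
# Toy client-attribution example (NOT TraceFL's algorithm): with FedAvg and a linear
# model, the predicted logit is the average of per-client logits, so each client's
# contribution to a given prediction can be computed exactly.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_features, n_classes = 5, 20, 3

# Each client trains locally and sends back a weight matrix (random stand-ins here).
client_weights = rng.normal(size=(n_clients, n_classes, n_features))

# FedAvg: the global model is the mean of the client models.
global_weights = client_weights.mean(axis=0)

x = rng.normal(size=n_features)            # a test input
logits = global_weights @ x
pred = int(logits.argmax())

# logit_pred = (1/K) * sum_k  w_pred^(k) . x  -- an exact per-client decomposition.
per_client = (client_weights[:, pred, :] @ x) / n_clients
assert np.isclose(per_client.sum(), logits[pred])

ranking = np.argsort(-per_client)
print("predicted class:", pred)
print("clients ranked by contribution to this prediction:", ranking)
```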

Improve this page

Add a description, image, and links to the interpretability-and-explainability topic page so that developers can more easily learn about it.


Add this topic to your repo

To associate your repository with the interpretability-and-explainability topic, visit your repo's landing page and select "manage topics."
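Besides the web UI path described above, the topic can also be set programmatically through the GitHub REST API's repository-topics endpoints. The snippet below is a minimal sketch: OWNER/REPO are placeholders, and it assumes a personal access token with permission to administer the repository is available in the GITHUB_TOKEN environment variable. Because the PUT endpoint replaces the full topic list, the sketch reads the current topics first and appends to them.

```python
# Add the interpretability-and-explainability topic to a repository via the REST API.
import os
import requests

owner, repo = "OWNER", "REPO"          # placeholders for your repository
url = f"https://api.github.com/repos/{owner}/{repo}/topics"
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

# PUT replaces the whole topic list, so fetch the existing topics and append.
current = requests.get(url, headers=headers).json().get("names", [])
topics = sorted(set(current) | {"interpretability-and-explainability"})

resp = requests.put(url, headers=headers, json={"names": topics})
resp.raise_for_status()
print(resp.json()["names"])
```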
